Since AWS re:Invent 2020 was 100% virtual, I was able to consume more content than I typically do at the conference, but this came at the cost of missing opportunities to meet new people and catch up with those I typically see every year at this time. The nice thing is that more people around the world got an opportunity to learn but – all that said – I hope we’re in person in Las Vegas for AWS re:Invent 2021.
This year, there were 145 announcements of new services or features at the conference (and more at “pre:Invent” in November). Instead of providing an overview of all of the announcements from re:Invent 2020, I decided to choose the top 10 that I believe are most relevant to accelerating the speed and safety of delivering software to end users. That is, those most relevant to the DevOps and DevSecOps communities. They are:
- AWS Audit Manager – Continuously audit your AWS usage to simplify how you assess risk and compliance.
- AWS Proton – Automated management for container and serverless deployments.
- Amazon DevOps Guru – ML-powered cloud operations service to improve application availability.
- Amazon SageMaker Pipelines – First purpose-built CI/CD service for machine learning.
- AWS Fault Injection Simulator – Improve resiliency and performance with controlled chaos engineering.
- AWS CloudFormation Modules – Building blocks that can be reused across multiple CloudFormation templates and are used just like native CloudFormation resources.
- Amazon CodeGuru – Security Detector – Incorporating security in code reviews using Amazon CodeGuru Reviewer.
- Amazon ECS Deployment Circuit Breaker – Automatically roll back unhealthy service deployments without the need for manual intervention.
- AWS Network Firewall – Deploy network security across your Amazon VPCs with just a few clicks.
- AWS Service Catalog AppRegistry – Repository of your applications and associated resources.
In December 2020, I provided my initial reaction to the first two weeks of re:Invent announcements in a video available here. In this post, I provide much more depth to cover the top 10 DevOps and DevSecOps services and features announced at or before the conference.
When people think of the cloud, they’re often referring to a cluster of centralized data centers in a geographic location (such as us-east-1 in Northern Virginia). As of January 2021, there are 24 AWS Regions, each of which might contain dozens of data centers grouped into a handful of Availability Zones [Source]. As of 2020, AWS is making it clear that they want the cloud to be everywhere, not just in a centralized data center. This might be the edge, ECS/EKS Anywhere, or machine to machine – to name a few. It is an interesting direction: services like Greengrass and Outposts bring mini clouds wherever you need them, making the idea of the cloud as a set of centralized data centers less and less relevant.
Another takeaway for me is the ever-increasing paradox of choice for builders on AWS. It’s starting to feel similar to the experience of searching for something simple like a garden rake on Amazon.com and being presented with over 1,000 choices. You almost need a concierge for AWS now. Even as an experienced AWS practitioner, I find it increasingly difficult to determine the best set of services and tools to use in designing a solution architecture.
We continue to see an emphasis on releasing managed services that reduce or eliminate undifferentiated heavy lifting, such as managing the availability and scalability of the service for you. All of the services and features I cover in this post fit into this category.
Lastly, I am happy to see that many more of the generally available services come with CloudFormation support on the day they’re released. AWS is listening to its customers!
DevOps and DevSecOps
DevOps is about accelerating the speed and confidence of feedback between end users and engineers. This is done by regularly applying organizational, cultural, process, and tooling improvements within a team and organization. DevSecOps is a facet of DevOps that focuses on security and how to accelerate this speed safely. Teams often do this by automating and integrating security into every aspect of their software delivery process. Figure 1 shows how Amazon incorporates these checks into its software delivery process so that feedback between users and engineers is fast and safe.
Figure 1 – How Amazon is Automating safe, hands-off deployments
The focus at Mphasis Stelligent is in helping customers accelerate the speed and confidence of delivering their software to production. Consequently, the top 10 AWS re:Invent announcements I cover in this post are the DevSecOps-related features that best help safely accelerate these feedback loops. Moreover, in relevant cases, I am providing examples of how you might automate the provisioning of some of these new services and features and releasing changes through a deployment pipeline.
AWS Audit Manager
With AWS Audit Manager, you can continuously audit your AWS usage to simplify how you assess risk and compliance. It’s a fully-managed service that continuously collects data to help prepare for audits and integrates with over 155 AWS services to provide a single pane of glass on audit-related activities. Audit Manager uses established frameworks for PCI, HIPAA, and others. Essentially, it can help you always be audit ready – whether it’s an internal or external audit. It is generally available.
Figure 2 shows how you can select a prebuilt framework or custom framework, define the assessment scope, and activate it. What’s more, you can generate assessments reports to provide to auditors.
Figure 2 – How AWS Audit Manager Works [Source]
Figure 3 shows how you can use the AWS Audit Manager Console to configure an assessment. You can choose one of the – currently – 29 industry frameworks (e.g., CIS, AWS, PCI). You select the AWS service(s) you wish to audit and run the assessment. It takes about 24 hours to generate a list of compliance checks along with evidence folders indicating why a particular check failed.
Figure 3 – Creating an Assessment in AWS Audit Manager Console [Source]
Audit Manager integrates with AWS Security Hub, AWS Config, AWS Control Tower, and AWS CloudTrail.
AWS::AuditManager::Assessment provides the ability to automate the provisioning of an assessment in Audit Manager. I have provided a working example below. It assumes that you have already created an S3 bucket and an IAM Role whose names match the name of the stack you launch in CloudFormation.
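The snippet below is a minimal sketch of such an assessment. The FrameworkId is a placeholder you would replace with the ID of a prebuilt or custom framework from your account, and the bucket and role naming follows the stack-name assumption described above.

```yaml
Resources:
  AuditManagerAssessment:
    Type: AWS::AuditManager::Assessment
    Properties:
      Name: !Ref AWS::StackName
      # Placeholder framework ID; look up a prebuilt or custom framework ID
      # in your account (e.g., via the Audit Manager console or CLI).
      FrameworkId: 11111111-1a2b-3c4d-5e6f-111111111111
      AssessmentReportsDestination:
        # Assumes a pre-existing S3 bucket named after the stack
        Destination: !Sub s3://${AWS::StackName}
        DestinationType: S3
      Roles:
        # Assumes a pre-existing IAM role named after the stack
        - RoleArn: !Sub arn:aws:iam::${AWS::AccountId}:role/${AWS::StackName}
          RoleType: PROCESS_OWNER
      Scope:
        AwsAccounts:
          - Id: !Ref AWS::AccountId
        AwsServices:
          - ServiceName: s3
```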
AWS Audit Manager is a regional service. You might deploy it on a per region basis or as part of an overall AWS account or AWS Organizations bootstrapping setup. For example, you might use AWS CodePipeline to use CloudFormationStackSet and CloudFormationStackInstance actions to deploy a CloudFormation StackSet across multiple regions and multiple AWS accounts.
A resource assessment collects, stores, and manages evidence in the form of a resource’s snapshot configuration, user activity, or a compliance check result. AWS Audit Manager currently charges $1.25 per 1,000 resource assessments per account per region. For more information, see AWS Audit Manager Pricing.
AWS Proton
AWS Proton provides automated management for container and serverless deployments. As of January 2021, it is in public preview. AWS describes Proton as a way to manage your infrastructure so developers can focus on coding. Figure 4 illustrates how it works.
Figure 4 – How AWS Proton Works [Source]
AWS mentions that Proton can be used by “…Platform engineering teams…to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates.”
Proton provides the ability to create pre-baked deployment patterns via templates. It’s geared towards enterprise teams that might have a “platform team” (there are many other names for these centralized teams in enterprises) that provide common patterns for deployment of serverless and container-based applications.
Over the years, Mphasis Stelligent has helped enterprise customers implement similar patterns with something called “Pipeline Factories”. You can think of them as “pipelines that generate pipelines for tens or hundreds of teams”.
Proton helps customers define templates that create the structure for applying common deployment patterns using existing AWS services such as AWS CodePipeline, AWS Service Catalog, and AWS CloudFormation – among what will be many other service integrations. These templates define how deployments behave across multiple teams.
In the short term, I think people will be confused by all the seemingly different options for deployment and assume that Proton is yet another one; however, I expect – or at least hope – that Proton helps reduce some of this complexity in enterprises by defining deployment best practices and guardrails as code that utilizes existing AWS services.
These are the four steps for launching environments and/or services using AWS Proton.
- Create Proton templates (environment or service).
- Set up for Proton (e.g., an IAM Role).
- Create and deploy an environment based on an environment template.
- Create and deploy a service based on a service template.
An environment defines a set of shared resources and policies that apply to all of the services deployed to it. A service defines how your application is run within an environment [Source].
Figure 5 shows the steps in using the AWS Console to deploy an environment and service using AWS Proton.
Figure 5 – Configuring AWS Proton using the Console [Source]
Since AWS Proton is still in preview, you need to install the Proton APIs in order to run commands. You will:
- Set up an Amazon S3 bucket
- Set up a GitHub repository connection
- Install the AWS CLI Proton API
The steps for installing the Proton APIs to run the CLI are documented in this blog post. There’s currently no CloudFormation support but, hopefully, it will be included when the service is generally available. Once installed, you can run a command similar to the snippet below to create an environment template using Proton [Source].
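As an illustration only, the command below sketches registering an environment template with the preview-era CLI. The template name, display name, and description are hypothetical, and the `proton-preview` command names may well change once the service is generally available.

```shell
# Register an environment template with AWS Proton (preview-era CLI syntax;
# names and flags are illustrative and may change at general availability).
aws proton-preview create-environment-template \
  --region us-east-1 \
  --template-name "my-fargate-env" \
  --display-name "Fargate environment" \
  --description "Shared VPC and ECS cluster for Fargate services"
```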
AWS has provided example proton templates at aws-proton-sample-templates.
There is no additional charge for AWS Proton itself, and there are no minimum fees or upfront commitments. You pay only for the AWS resources provisioned through Proton to store and run your application, such as S3 buckets, EC2 instances, and containers.
Amazon DevOps Guru
Amazon DevOps Guru is a machine learning (ML) powered cloud operations service to improve application availability. As of January 2021, this service is currently in preview.
The ML models that DevOps Guru uses are based on years of experience from Amazon running tens of thousands of applications at scale. DevOps Guru regularly looks for anomalous behavior such as increased latency, error rates, and resource constraints that could lead to possible service outages or disruptions. Since DevOps Guru can be run at all times, it can regularly report on anomalous behavior through its dashboard or notifications. It provides reactive and proactive Insights along with the mean time to recovery of CloudFormation stacks.
With Proactive Insights – for example – you can be made aware of issues such as memory utilization before they become a problem that affects your end users. Whereas there are numerous ways to assess the operational health of facets of your AWS accounts (e.g., Config, CloudWatch, Trusted Advisor, and others), DevOps Guru focuses on your application and infrastructure health by looking at how the applications are running in production. Figure 6 shows how Amazon DevOps Guru couples machine learning models with CloudWatch, Config, CloudTrail, and X-Ray data to analyze the provisioned resources from selected CloudFormation stacks and provide recommendations.
Figure 6 – Amazon DevOps Guru – How It Works [Source]
Figure 7 shows the process for using the AWS Console to enable Amazon DevOps Guru. The steps are pretty straightforward. You identify which AWS resources you want to analyze, optionally select an SNS topic to receive operational notifications, and click Enable to start the service.
Figure 7 – Enabling Amazon DevOps Guru from the AWS Console
In Figure 8, you see an example of a populated DevOps Guru dashboard that provides Insights to improve your application performance.
Figure 8 – Amazon DevOps Guru Dashboard [Source]
There are two DevOps Guru resources supported by AWS CloudFormation: AWS::DevOpsGuru::NotificationChannel and AWS::DevOpsGuru::ResourceCollection. NotificationChannel allows you to set up an SNS channel to receive notifications on important DevOps Guru events. ResourceCollection defines a collection of supported resources, such as a set of CloudFormation stacks, that DevOps Guru will analyze for anomalous behavior. In the snippet below [Source], you can see a simple example of provisioning a DevOps Guru ResourceCollection in CloudFormation.
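Here is a minimal sketch that provisions both resources. The stack name in the resource collection filter is hypothetical; you would list the stacks you want DevOps Guru to analyze.

```yaml
Resources:
  DevOpsGuruTopic:
    Type: AWS::SNS::Topic

  # Sends DevOps Guru notifications to the SNS topic above
  DevOpsGuruNotificationChannel:
    Type: AWS::DevOpsGuru::NotificationChannel
    Properties:
      Config:
        Sns:
          TopicArn: !Ref DevOpsGuruTopic

  # DevOps Guru analyzes the resources in the listed CloudFormation stacks
  DevOpsGuruResourceCollection:
    Type: AWS::DevOpsGuru::ResourceCollection
    Properties:
      ResourceCollectionFilter:
        CloudFormation:
          StackNames:
            - my-application-stack   # hypothetical stack name
```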
You might have CodePipeline deploy a CloudFormation stack that provisions Amazon DevOps Guru for your region(s) using a CodePipeline CloudFormation Deploy Provider.
There are three pricing dimensions for DevOps Guru. Currently, DevOps Guru charges $0.0028/hour for Lambda Functions and S3 buckets and $0.0042/hour for the remaining resources such as EC2 instances, ECS Service, and ELB. You also pay $0.000040 for each DevOps Guru API call (e.g. DescribeAccountOverview, ListInsights). This translates to $0.40 for 10K API calls. See Amazon DevOps Guru Pricing for more details.
Amazon SageMaker Pipelines
Amazon SageMaker Pipelines is the first purpose-built continuous delivery service for machine learning. It’s built for both data scientists and the Operations teams that support them. It allows you to get quicker and more useful feedback when building and tuning your machine learning models. It’s integrated into SageMaker and SageMaker Studio. The purpose is to ensure that only approved models get deployed to production.
SageMaker Pipelines provides the ability to configure source control repositories, run experiments, group your models, define and access your model endpoints, and configure settings. SageMaker provides example projects that define a pipeline.
SageMaker Pipelines launches CloudFormation stacks and a CodePipeline pipeline under the hood. You can find the status of everything associated with the pipeline in SageMaker Studio.
SageMaker Pipelines handles everything from data processing, training, model evaluation, and model registration to manual approvals and deploying the model to an endpoint. You can see an example SageMaker pipeline in Figure 9.
Figure 9 – Amazon SageMaker Pipeline Example [Source]
Amazon SageMaker Pipelines is available within SageMaker Studio and at no additional charge. You are charged for any of the resources it provisions such as SageMaker endpoints. For more information, see Amazon SageMaker Pricing.
AWS Fault Injection Simulator
AWS Fault Injection Simulator (FIS) is a fully-managed service for running controlled chaos engineering experiments that can help improve resiliency and performance. As of January 2021, it has not yet been released; AWS plans to make it available later in 2021.
Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production [Source]. In 2011, Netflix made an open source tool called Chaos Monkey available for performing these chaos experiments. These types of tools are built on the notion that everything fails all the time so you need to build your systems around these failures to reduce the chance that end users experience them. Products like Gremlin have been providing chaos engineering as a service for years already. The other key part of the counterintuitive chaos engineering approach is that – eventually – you’re running these experiments against production resources to ensure that you’re anticipating all of the possibilities that might occur before they cause an outage or worse.
How it Works
Figure 10 shows the key aspects of AWS FIS.
Figure 10 – AWS Fault Injection Service – How it Works [Source]
To get started with AWS FIS, you define an Experiment Template. In this template you define the fault injection actions, targets, and safeguards to run during the chaos experiment. Examples of fault injection actions include things like stopping an EC2 instance or an RDS database instance. For a full list of currently supported fault injection actions, see Figure 11. Once you have an Experiment Template, you can run Experiments. Running an experiment executes the fault injection actions in a controlled manner in your AWS environment.
Figure 11 – Currently Supported Fault Injections of the AWS FIS [Source]
Figure 12 shows an example of using the AWS Console for configuring a stop EC2 instance fault injection. The idea is that you create a controlled experiment to stop an instance while ensuring that you’ve designed your architecture to withstand a fault like this.
Figure 12 – Setting up a Stop EC2 Instance Fault Injection in AWS FIS [Source]
AWS FIS has not been released yet, and there is currently no information available on CloudFormation support or pricing.
AWS CloudFormation Modules
AWS CloudFormation Modules are building blocks that can be reused across multiple CloudFormation templates and are used just like native CloudFormation resources. They’re another way to extend CloudFormation (along with Transforms, Custom Resources, Macros, the CDK with its constructs, and Resource Providers). The difference is that with Modules, you define a template or templates that adhere to your best practices and then use these modules in your own CloudFormation templates without needing to write a Lambda function or a Resource Provider to extend CloudFormation. AWS recommends generating modules using the CloudFormation CLI.
Figure 13 shows the key features of CloudFormation Modules along with a code snippet demonstrating how a module is defined.
Figure 13 – AWS CloudFormation Modules Features [Source]
You can find example modules published by AWS here.
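Once a module is registered (e.g., with `cfn submit` from the CloudFormation CLI), you reference it in a template like any native resource. The module type name and its properties below are hypothetical; you would use the name you chose when registering your own module.

```yaml
Resources:
  # "MyOrg::S3::CompliantBucket::MODULE" is a hypothetical module type name.
  # Registered module types always end in "::MODULE".
  WebAppBucket:
    Type: MyOrg::S3::CompliantBucket::MODULE
    Properties:
      BucketName: my-example-bucket   # passed through to the module's template
```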
There is no additional charge for using CloudFormation Modules. You are only charged for the resources that get created when these Modules are used.
Amazon CodeGuru – Security Detector
Announced at re:Invent 2019, Amazon CodeGuru is a machine learning-powered service that helps you automate code reviews and find your most expensive lines of code. There are two aspects to CodeGuru: a code reviewer and a profiler. At re:Invent 2020, AWS announced that CodeGuru Reviewer now provides support for Security Detectors (currently only for Java code bases), which inspect code for things like:
- Hard-coded credentials in API calls
- Outdated cryptographic ciphers
- Insecure handling of untrusted data – such as not sanitizing user-supplied input to protect against cross-site scripting, SQL injection, LDAP injection, path traversal injection, and more
This allows you to incorporate security into code reviews using Amazon CodeGuru Reviewer. CodeGuru Reviewer provides automated comments in pull requests for developers to review and approve. [Source].
Figure 14 shows how to configure Amazon CodeGuru using the Console to get code and security recommendations.
Figure 14 – Amazon CodeGuru Security Detector Setup [Source]
You can see an example of how CodeGuru identifies any source code that does not follow security best practices in Figure 15.
Figure 15 – CodeGuru Flagging a Security Issue in Code Related to Encryption [Source]
You can incorporate code quality and security checks into a continuous integration process in which CodeGuru Security Detector is run against your code when a pull request occurs. This way, developers can fix any security issues before they’re committed to the mainline of the source code repository.
Per AWS, you can perform a full repository analysis or only perform an analysis for every source code pull request made on that repository. For full repository analysis, the first 30,000 lines of code analyzed each month per payer account are free. Otherwise, CodeGuru charges $0.50 per 100 lines of code analyzed for the first 1,500,000 lines of code per month and $0.40 per 100 lines of code analyzed after that limit. For more information, see Amazon CodeGuru Reviewer Pricing.
Amazon ECS Deployment Circuit Breaker
AWS announced the release of the Amazon ECS deployment circuit breaker feature. With the circuit breaker enabled, ECS automatically rolls back unhealthy service deployments without the need for manual intervention.
Martin Fowler describes a circuit breaker as something in software systems that monitors for a certain number of failures and, once it reaches a threshold, trips. The pattern was originally described in Michael Nygard’s book Release It!.
The new ECS circuit breaker feature runs at the ECS scheduler/orchestrator level and supports EC2- or Fargate-backed tasks. It enables quick discovery of a failed deployment and allows you to automatically roll back to a previously deployed version. Let’s say you have a pipeline that deploys some new changes, and the task starts to churn. It can take time to register that the deployment has failed (for example, CloudFormation might take hours to indicate the failure). The new ECS circuit breaker works with anything that drives the ECS scheduler, so this includes CloudFormation, the ECS Console and CLI, Terraform, the SDKs, and the CDK. AWS added logic to the ECS control plane/orchestrator to determine failures and trip the breaker.
AWS determines when the circuit breaker trips based on the size of your deployment. With this feature, you can reduce or eliminate any similar custom logic you may have implemented prior to the release of this feature.
As part of the AWS::ECS::Service resource a new DeploymentCircuitBreaker property has been added. A code snippet of using the DeploymentCircuitBreaker property for an ECS Service in a CloudFormation template is shown below.
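The sketch below shows the new property in context. It assumes the cluster and task definition are defined elsewhere in the same template.

```yaml
Resources:
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster                 # assumes an AWS::ECS::Cluster defined elsewhere
      TaskDefinition: !Ref TaskDefinition   # assumes an AWS::ECS::TaskDefinition defined elsewhere
      DesiredCount: 2
      DeploymentConfiguration:
        DeploymentCircuitBreaker:
          Enable: true     # trip the breaker when a deployment cannot reach a steady state
          Rollback: true   # automatically roll back to the last completed deployment
```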
You might have CodePipeline deploy your ECS service from a CloudFormation Deploy Provider. In doing this, it can launch the CloudFormation stack that provisions the ECS service.
There is no additional cost for this feature. As before, you still pay for the cost of compute or related ECS resources.
AWS Network Firewall
AWS Network Firewall is a fully-managed service that enables network security across your Amazon VPCs. It is generally available. Think of it as a firewall that runs on top of all of your VPCs, on both ingress from and egress to the Internet. It helps you provide blanket protections for your entire VPC. It provides high availability and scalability, which also means you don’t have to think about patching and managing the underlying instances of the Network Firewall. It works with AWS Firewall Manager to centrally manage firewall rules across existing accounts and VPCs. It also provides real-time firewall activity monitoring via Amazon CloudWatch metrics and comes with a rule engine to create custom rules (IP, port, protocol, domain, and pattern matching).
Figure 16 illustrates how AWS Network Firewall works.
Figure 16 – How AWS Network Firewall Works [Source]
AWS Network Firewall helps protect you across layers 3-6 of the OSI model. It inspects traffic between VPCs, outbound to and inbound from the Internet, and between AWS Direct Connect and VPNs.
Figure 17 shows how to use the AWS Network Firewall Console to create a network firewall.
Figure 17 – Creating an AWS Network Firewall using the Console
AWS CloudFormation currently provides support for four AWS Network Firewall resources. They are listed below.
- AWS::NetworkFirewall::Firewall – Provides stateful, managed network firewall and intrusion detection and prevention filtering for your VPCs in Amazon VPC.
- AWS::NetworkFirewall::FirewallPolicy – Defines the stateless and stateful network traffic filtering behavior for your Network Firewall. You can use one firewall policy for multiple firewalls.
- AWS::NetworkFirewall::LoggingConfiguration – Defines the destinations and logging options for a Network Firewall.
- AWS::NetworkFirewall::RuleGroup – Defines a reusable collection of stateless or stateful network traffic filtering rules.
The snippet below shows how you would provision a Network Firewall in CloudFormation [Source].
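Here is a minimal sketch of a firewall plus the policy it requires. The VPC and subnet IDs are placeholders; in practice you would reference a dedicated firewall subnet in each Availability Zone, and the policy here simply forwards all stateless traffic to the stateful engine.

```yaml
Resources:
  FirewallPolicy:
    Type: AWS::NetworkFirewall::FirewallPolicy
    Properties:
      FirewallPolicyName: sample-firewall-policy
      FirewallPolicy:
        # Forward all traffic (including fragments) to the stateful engine
        StatelessDefaultActions:
          - aws:forward_to_sfe
        StatelessFragmentDefaultActions:
          - aws:forward_to_sfe

  Firewall:
    Type: AWS::NetworkFirewall::Firewall
    Properties:
      FirewallName: sample-firewall
      FirewallPolicyArn: !Ref FirewallPolicy
      VpcId: vpc-11111111              # placeholder; reference your own VPC
      SubnetMappings:
        - SubnetId: subnet-11111111    # placeholder; a dedicated firewall subnet
```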
You might have CodePipeline deploy a CloudFormation stack that provisions an AWS Network Firewall for your region(s) across designated VPCs using a CodePipeline CloudFormation Deploy Provider.
Pricing is $0.395 per hour per Network Firewall endpoint and $0.065 per GB of traffic processed. For every hour and GB charged for Network Firewall endpoints, you can use one hour and one GB of NAT gateway at no additional cost. For more information, see AWS Network Firewall Pricing.
AWS Service Catalog AppRegistry
AWS Service Catalog released a feature called AppRegistry. It’s a repository in which you can associate your applications with their related resources. There are many uses for this capability, including making it easier to search for resources, classify data, track costs, identify versions, and meet certain compliance certifications. You can use AppRegistry without needing to use the rest of Service Catalog. The key benefit you get from AppRegistry is context between your application and its resources. What’s more, you can automate updates of stack and metadata changes by calling AppRegistry from your deployment pipelines when changes occur.
There are five primary steps to setting up the AppRegistry in an enterprise:
- An Administrator configures company-wide shared attribute groups.
- Each development team sets up attribute groups for their team.
- Each development team creates applications in the AppRegistry.
- Each development team associates attribute groups to their applications.
- Each development team associates existing AWS CloudFormation stacks with their applications.
Each of the above capabilities can be performed from the AWS CLI or CloudFormation – see the newly-provided resources in the next section. Here’s an example of associating an existing CloudFormation stack to an application using the AWS CLI.
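The application and stack names below are placeholders; substitute your own AppRegistry application and an existing CloudFormation stack.

```shell
# Associate an existing CloudFormation stack with an AppRegistry application.
# "my-application" and "my-existing-stack" are hypothetical names.
aws servicecatalog-appregistry associate-resource \
  --application my-application \
  --resource-type CFN_STACK \
  --resource my-existing-stack
```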
Figure 18 illustrates that by using AppRegistry, you can associate an application with pipelines, environments, and CloudFormation stacks.
Figure 18 – AppRegistry sample application definition [Source]
There are four new CloudFormation resources to support the launch of AppRegistry. They are:
- AWS::ServiceCatalogAppRegistry::Application – Provisions a Service Catalog AppRegistry application, which is the top-level node in a hierarchy of related cloud resource abstractions.
- AWS::ServiceCatalogAppRegistry::AttributeGroup – Creates a new attribute group as a container for user-defined attributes. This enables users to have full control over their cloud application’s metadata in a rich, machine-readable format to facilitate integration with automated workflows and third-party tools.
- AWS::ServiceCatalogAppRegistry::AttributeGroupAssociation – Links applications and attribute groups.
- AWS::ServiceCatalogAppRegistry::ResourceAssociation – Links resources and resource types with applications.
The snippet below shows how you would provision an AppRegistry Application in CloudFormation.
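A minimal sketch follows; the application name, description, and tag values are hypothetical.

```yaml
Resources:
  Application:
    Type: AWS::ServiceCatalogAppRegistry::Application
    Properties:
      Name: my-application             # hypothetical application name
      Description: Example application registered in AppRegistry
      Tags:
        team: platform                 # illustrative tag key/value
```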
You can create a deployment pipeline using a service such as CodePipeline with CloudFormation. What’s more, you can deploy CloudFormation stacks from this pipeline. Since AWS Service Catalog provides support for 14 CloudFormation resources, you can deploy Service Catalog related resources from this pipeline. You can also define the AppRegistry resources as part of this pipeline.
AppRegistry uses the same per API call pricing model that Service Catalog uses. Therefore, after 1,000 API calls in a given month, you’re charged $0.0007 per API call (14 calls for 1 cent). For more information, see AWS Service Catalog Pricing.
Here’s a summary of some of the recent CloudFormation changes related to the re:Invent 2020 re:Cap topics that I covered in this post.
- CloudFormation Release History
- Using the CloudFormation Registry
- AWS::ECS::Service DeploymentCircuitBreaker
These 10 services and features accelerate the speed and safety of feedback by ensuring that your code is performant and secure, that deployment patterns and resources are codified, that your resources are more secure and compliant, and that you have the means to mitigate any potential security incidents.
In a fireside chat at the inaugural re:Invent in 2012, Amazon.com CEO Jeff Bezos said that it’s important to focus your business strategy on things that are unlikely to change in the future. In Werner Vogels’ 2018 keynote, he focused on the fact that “everyone wants to just focus on business logic” (see Figure 19). This desire is unlikely to change. Developers want speed and less friction when creating solutions, and if you look at the 10 announcements I covered in this post, you see that AWS continues to seek ways to lessen this friction and help developers focus more on business logic and less on undifferentiated heavy lifting.
Figure 19 – (2018 re:Invent keynote) The future is that builders will be able to focus on business logic [Source]
However, providing developers with so many options comes with the risk of initially increasing friction. So, I expect that AWS will seek to improve the developer experience by helping developers navigate their way through all of these options.
Mphasis Stelligent focuses on helping enterprise customers apply DevSecOps practices to accelerate the speed and confidence of delivering software to production. If you’re interested in applying these and other services and features to your solutions, Contact Us.
re:Invent Sessions and Posts
Below, I’ve included the blog posts, re:Invent sessions, and supporting videos related to the topics I covered in this post.
- Preview: AWS Proton – Automated Management for Container and Serverless Deployments
- Incorporating security in code-reviews using Amazon CodeGuru Reviewer
- New – VPC Reachability Analyzer
- Announcing Amazon ECS deployment circuit breaker
- Increase application visibility and governance using AWS Service Catalog AppRegistry
- AWS Network Firewall – New Managed Firewall Service in VPC
- AWS Proton: A first look
- [NEW LAUNCH!] Introducing AWS Audit Manager – Session EMB030
- AWS Proton: Automating infrastructure provisioning & code deployments (Session EMB008)
- EMB031: Improve application availability with ML-powered insights using Amazon DevOps Guru
- [NEW LAUNCH!] How to create fully automated ML workflows with Amazon SageMaker Pipelines
- [NEW LAUNCH!] AWS Fault Injection Simulator: Fully managed chaos engineering service
- What’s new with AWS CloudFormation (covers CloudFormation Modules beginning at 35:37)
- Introducing Amazon SageMaker Pipelines – AWS re:Invent 2020
- AWS on Air 2020: AWS What’s Next ft. Amazon SageMaker Pipelines
- Circuit Breakers for Amazon ECS
- Gaining application-level governance and cost visibility (Covers AWS Service Catalog AppRegistry beginning at 29:00)
- Introducing AWS Network Firewall
Stelligent Amazon Pollycast