Many enterprises are driving software development and delivery toward a DevOps mindset. At the same time, organizations struggle with growing security challenges as they adopt these innovative software practices.

Embedding security within the deployment lifecycle is non-negotiable, so security must be integrated into CI/CD workflows carefully to keep pace with an ever-evolving technology landscape.

DevSecOps provides an automated approach to integrating security into the software delivery lifecycle, but adding security controls to containerized solutions comes with its own challenges.

Furthermore, many open-source container images contain known and unknown vulnerabilities, and most organizations struggle to determine with a high level of confidence how secure those images are. These images add functionality and useful features that speed up solution implementation, yet many do not comply with organizational security goals. The reality is that failure to design secure deployment pipelines can cost a company millions.

This blog post is a deployment guide for creating and launching a standalone container scanning solution, Anchore Engine, within the AWS ecosystem. It describes how to set up a release pipeline that automates vulnerability scanning of container images.

The approach implemented in this blog uses Amazon ECS as its deployment option; other options are available. Visit anchore-quickstart for more details on setup options.

Solution Overview

This approach uses an open-source container scanning tool called Anchore Engine as a proof of concept and provides examples of how Anchore integrates with your favorite CI/CD systems and orchestration platforms.

Anchore is a container compliance platform that ensures the security and stability of container deployments. It takes a data-driven approach to static analysis of container images and enforces policy-based compliance.

This tool automates the inspection, analysis, and evaluation of images against user-defined checks, delivering high confidence in container deployments by ensuring workload content meets the required criteria. Each scan produces a policy evaluation result for the image: pass or fail against the policies defined by the user.

For a more detailed understanding of the concepts and an overview of Anchore Engine, visit anchore overview.

Architecture

The diagram below shows the high-level architecture of Anchore Engine on AWS. In this use case, Anchore Engine is deployed as an Amazon ECS service with the EC2 launch type, behind an Application Load Balancer.

The anchore-engine and anchore-database containers are deployed into private subnets behind the Application Load Balancer, which is hosted in public subnets. The private subnets must have a route to the internet through a NAT gateway, because Anchore Engine fetches the latest vulnerability data from publicly available online sources.

The overall Anchore Engine architecture comprises three tiers of services: the API, State, and Worker tiers. The Engine API is the primary API for the entire system; Engine State Management handles the catalog, simple-queue, and policy-engine services; and the Engine Workers do the heavy lifting of downloading and analyzing images. The Anchore Engine database provides an intermediate storage medium between the services, using a single PostgreSQL database with the default public schema namespace.

[Diagram: high-level architecture of Anchore Engine on AWS, deployed as an Amazon ECS service behind an Application Load Balancer]

Getting Started

Deploying this application involves several steps that must be executed as specified. Before running any commands, review the prerequisites section to ensure the required packages and software are installed.

Prerequisites

Ensure that the following are installed or configured on your workstation before deploying Anchore Engine:

  • Docker
  • Git
  • AWS CLI
  • Make
  • GitHub personal access token (stored in AWS SSM Parameter Store; see the example below)
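
As an example, the token can be stored as a SecureString parameter with the AWS CLI; the parameter name below is only a placeholder, so use whatever name the deployment configuration expects:

    # Store a GitHub personal access token in SSM Parameter Store (parameter name is a placeholder)
    aws ssm put-parameter \
      --name "/anchore/github-token" \
      --type SecureString \
      --value "<your-github-personal-access-token>"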

Installation

Clone this GitHub repository:
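
For example (the URL and directory name are placeholders for this repository):

    git clone <repository-url>
    cd <repository-name>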

Configure and set up your AWS CLI:
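
For example, configure credentials interactively or export a profile and region for the current shell (values are placeholders):

    aws configure
    export AWS_PROFILE=<your-profile>
    export AWS_DEFAULT_REGION=us-east-1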

Set up the production environment

Build your deployment environment with Docker:
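
The exact command lives in the repository; a minimal sketch, assuming the deployment Dockerfile sits at the repository root and using an illustrative image tag:

    # Build the dockerized deployment environment
    docker build -t anchore-deploy-env .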

This builds the local dockerized image used for deploying and launching Anchore-Engine. It installs the packages defined in the Dockerfile and the Python packages listed in the requirements.pip file. Using Docker keeps the environment requirements for deploying this application consistent across different environments.

Setup test environment

Build a testing environment within Docker by running:
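
A minimal sketch, assuming a separate test Dockerfile (the file name and tag are illustrative):

    # Build the local testing image
    docker build -f Dockerfile.test -t anchore-deploy-env:test .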

This testing image provides a local environment for running all of your local tests and helps launch a quick development environment for troubleshooting. It installs additional Python packages as stipulated in the requirements-test.pip file.

Deployments

This deployment comprises the following AWS resources:

  1. Amazon Elastic Container Registry (ECR) Repository
  2. Amazon VPC
    • Two public subnets
    • Two private subnets
    • NAT gateways to allow internet access for services in private subnets
    • Internet Gateway
    • Security Groups
  3. Amazon Application Load Balancer
    • Load Balancer
    • Listeners
    • Target Groups
  4. Amazon EC2
    • AutoScaling Group
    • CloudWatch
    • AWS IAM
  5. Amazon Elastic Container Service
    • Cluster
    • Services
    • Task Definitions
  6. AWS CodePipeline

The application launches Anchore-Engine on AWS and sets up AWS CodePipeline for automated image vulnerability scanning and detection. It deploys resources in the following order to achieve this:

Build and push Anchore-Engine Docker Image

First, create an Amazon Elastic Container Registry repository to host your Anchore Engine Docker image. Then, build the anchore-engine image on your workstation and push it to the ECR repository.

Run the following make command:
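
The actual target name is defined in the repository's Makefile; a hypothetical invocation:

    # Create the ECR stack, then build and push the anchore-engine image (target name is a placeholder)
    make app-image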

This command uses the app_image.py Python module, along with a YAML configuration template, to create a CloudFormation template. The module launches an AWS ECR CloudFormation stack that creates an ECR registry for the anchore-engine Docker image. [Note: a sample staging ECR repository is defined within this template for demonstration purposes only; target a separate repository for staging your scanned and tested images.] In addition, this module runs the push_image.sh script, which uses a Dockerfile to build a local anchore-engine image, tags the image, and pushes it to the ECR registry mentioned above.

Deploy Anchore-Engine Server

To launch your Anchore-Engine server, ensure that the anchore-engine image has been built and pushed to AWS ECR before deploying the Anchore-Engine ECS service.

Run this make command:
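
The target name below is a placeholder for the one defined in the repository's Makefile:

    # Launch the VPC, ALB, EC2, and ECS stacks for the Anchore-Engine service (target name is a placeholder)
    make deploy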

Note: Access the launched EC2 instance through AWS Systems Manager for troubleshooting purposes.

The above command uses the index.py Python module as the entry point to create CloudFormation templates with the troposphere template generator. The VPC, ALB, EC2, and ECS resources are then created through boto3 (AWS SDK for Python) API calls.

Each stack's parameters are extracted from the accompanying YAML configuration templates in the configs folder; these templates supply each CloudFormation stack's parameters at deployment time.

Launch a sample pipeline to integrate Anchore-Engine scanning with AWS CodePipeline

Your pipeline can scan either publicly available images or private registry images; to do so, configure your client environment with the anchore-cli client. For detailed information on installation, setup, and CLI commands, visit the anchore-cli GitHub repository.
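
As a minimal sketch, the client can be installed with pip and pointed at the Anchore-Engine API behind the Application Load Balancer (the password and DNS name are placeholders):

    # Install the Anchore CLI client
    pip install --user anchorecli

    # Point the client at the Anchore-Engine endpoint
    export ANCHORE_CLI_USER=admin
    export ANCHORE_CLI_PASS=<anchore-admin-password>
    export ANCHORE_CLI_URL=http://<alb-dns-name>/v1

    # Verify client-server connectivity
    anchore-cli system status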

Follow the examples available in the examples folder for a quick implementation using AWS CodePipeline with a CodeBuild project as a stage in your pipeline. Copy the contents of this directory into your application's source control, in the repository that AWS CodePipeline targets as its source stage.

Run the following command to launch a sample pipeline to test Anchore-Engine functionality:
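
The exact command is defined in the repository; a hypothetical make invocation:

    # Launch the sample AWS CodePipeline stack (target name is a placeholder)
    make pipeline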

This command uses the pipeline.py Python module to launch a CloudFormation stack from the pipeline.yml template, with a configuration YAML template that defines the CloudFormation parameters. Modify the provided configuration template in examples/aws-codepipeline/pipeline_configs.yml with the information for your target application and repository.

This stack contains a CodeBuild job as a Test stage, which executes a set of commands defined in a buildspec.yml (described below) within the CodeBuild environment. The buildspec defines the build commands and related settings for automatically pulling and building your application image, scanning it for CVEs, and issuing a PASS/FAIL status based on the scan results. Only images that pass are tagged and pushed to a staging repository.

The sample Test stage is organized into five steps, following the CodeBuild buildspec syntax.

Environment variables

This step defines custom environment variables that are used throughout the build process.
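
A sketch of the kind of variables involved, written here as shell exports (names and values are illustrative; in the buildspec they live under the env/variables section):

    export AWS_DEFAULT_REGION=us-east-1
    export IMAGE_REPO_NAME=<your-application-repository>
    export IMAGE_TAG=latest
    export ANCHORE_CLI_URL=http://<anchore-engine-alb-dns>/v1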

Install runtime packages and anchore CLI client

This section comprises the different phases of the build process.

The install phase specifies the Python 3.7 runtime (used to install awscli and anchorecli) and the Docker 18 runtime within the build environment; a sketch of these commands follows the list below.

  • Initializes the Docker daemon for Ubuntu base image
  • Updates apt-get
  • Installs pip, awscli, boto3, and Anchore CLI – anchorecli
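
A minimal sketch of these install-phase commands, assuming an Ubuntu-based build image where the Docker daemon is started manually (a common pattern for custom CodeBuild images):

    # Start the Docker daemon inside the build container and wait for it to come up
    nohup dockerd >/tmp/dockerd.log 2>&1 &
    timeout 30 sh -c 'until docker info >/dev/null 2>&1; do sleep 1; done'

    # Refresh package metadata and install the Python tooling
    apt-get update -y
    pip3 install --upgrade pip awscli boto3 anchorecli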

Configure ECR repository and anchore-cli user

  • Log in to ECR
  • Configure the Anchore CLI (anchore-cli) using the username, password, and URL for the server (the URL is pre-assigned as an environment variable in the AWS CodeBuild stage during pipeline stack creation); see the sketch below
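
A sketch of these two steps, with the account ID, region, and credentials as placeholders:

    # Authenticate Docker to the ECR registry
    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com

    # ANCHORE_CLI_URL is pre-assigned by the pipeline stack; supply the user and password here
    export ANCHORE_CLI_USER=admin
    export ANCHORE_CLI_PASS=<anchore-admin-password>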

Scan images using Anchore-Engine user-defined policy

This is where the actual scanning and testing are executed, with a pass/fail condition based on each scan result. A sketch of these commands follows the list below.

Check out the Anchore ECR configuration to register and target AWS ECR as a private registry.

  • Check running Anchore-Engine version
  • Check Anchore-Engine system status for client-server communication
  • Add a user-defined policy to Anchore-Engine by specifying the path to a JSON-formatted policy document. This policy document contains the policies, whitelists, mappings, whitelisted images, and blacklisted images to run the analysis against
    Note: Find a sample custom policy here. For more anchore-cli policy commands, visit anchore policies.
  • Add your target application image to Anchore-Engine by pulling the image from a public repository
    Note: You can also add images owned in a private registry, along with a specified custom Dockerfile, for analysis. Visit this link for more anchore-cli image commands.
  • Wait for analysis completion
  • Return a list of vulnerabilities found in the container image during analysis
  • Evaluate the analysis result for scan status using the custom policy. This checks the result of the scan activities and renders a PASS or FAIL, which determines whether the build fails or proceeds
  • Build the image locally
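
A sketch of the scan-and-evaluate sequence using anchore-cli (the image name and policy path are placeholders; anchore-cli evaluate check exits non-zero on a FAIL result, which fails the build):

    IMAGE=<registry>/<your-application-image>:latest

    anchore-cli --version
    anchore-cli system status

    # Register the user-defined policy bundle
    anchore-cli policy add anchore_policy.json

    # Submit the image for analysis and wait for the analysis to complete
    anchore-cli image add "${IMAGE}"
    anchore-cli image wait "${IMAGE}"

    # List vulnerabilities found during analysis
    anchore-cli image vuln "${IMAGE}" all

    # Evaluate the analysis result against the policy
    anchore-cli evaluate check "${IMAGE}" --detail

    # Build the application image locally for staging
    docker build -t <your-application-image>:latest .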

Stage tested images

  • Tag the built image
  • Push the scanned and tested image to a staging or production repository
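
For example (the registry URI and repository name are placeholders):

    # Tag the locally built image for the staging repository and push it
    docker tag <your-application-image>:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/<staging-repo>:latest
    docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/<staging-repo>:latest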

Alternatively, run the following command to deploy all of the above-mentioned resources needed for Anchore Engine. It combines the three deployments described above and launches all resources with a single command. All requirements must be met as stated in requirements.
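
A hypothetical single-command invocation (the target name is a placeholder for the one defined in the repository's Makefile):

    # Deploy the ECR repository, the Anchore-Engine service, and the sample pipeline in one run
    make deploy-all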

Clean-Up

Run the following command to tear down all resources deployed within AWS.
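
The target name below is a placeholder for the one defined in the repository:

    # Delete the pipeline, ECS, EC2, ALB, and VPC stacks, and the ECR repositories
    make teardown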

Summary

Enterprises can scan containers for security flaws against enterprise-specific policies using Anchore-Engine within Amazon Web Services (AWS). This blog post demonstrated how to implement automated container image security scans as part of a CI/CD pipeline to protect against common vulnerabilities and exposures (CVEs). This Anchore-Engine solution can also serve as a centralized Docker image vulnerability scanner and integrate with other CI/CD tools, e.g., AWS CodeBuild projects or Jenkins. To meet your organization's security requirements and compliance goals, define your own vulnerability policy in Anchore as instructed in anchore user-defined policy, using the anchore-cli policy commands.

The code for the examples demonstrated in this post is located on GitHub here.
