Whether it’s in or out of the cloud, most IT compliance comes in the form of a multitude of checklists – like the one you see below. It might be a spreadsheet, website, or other “digital” tool but, in the end, it’s still a checklist that software teams must comply with by filling out forms and then – ideally – fixing the discovered issues.

In many cases, in order to maintain adherence to internal and external compliance requirements, organizations distribute these checklists to hundreds of teams and/or conduct audits around release events or at random to ensure compliance. While this doesn’t scale for any organization, it’s particularly difficult to scale a model like this in the cloud, where you can provision thousands of resources at the click of a button.

To many, so much of this seems more a burden to satisfy compliance requirements (a form of “risk management theater”) than something that truly reduces organizational and system risk and protects end users. It doesn’t have to be this way.

In Automatically Remediate Noncompliant AWS Resources using Lambda, I described how to configure AWS resources in the console to automatically remediate noncompliant resources.

In this post, I take the concept to its logical conclusion by automating the entire continuous compliance and auto remediation solution as versioned code stored in AWS CodeCommit. The solution is provisioned with AWS CloudFormation and AWS CodePipeline, along with the services that detect and fix noncompliant resources: AWS Config Rules, Amazon CloudWatch Event Rules, and AWS Lambda.

Architecture and Implementation

In the figure below, you see the architecture for launching a deployment pipeline that gets source assets from CodeCommit, builds with CodeBuild, and deploys a Lambda function to AWS.

The components of this solution are described in more detail below:

  • Amazon SNS – Provisions a Simple Notification Service (SNS) Topic using the AWS::SNS::Topic resource. The SNS topic is used by the CodeCommit repository for notifications.
  • Amazon S3 – There are several buckets defined in the template using the AWS::S3::Bucket resource: one to store the AWS Config configuration items, another to store Lambda source artifacts, and another to store the CodePipeline artifacts. There’s also an AWS::S3::BucketPolicy resource that grants AWS Config permission to write its configuration items to the bucket.
  • AWS CloudFormation – All of the resource provisioning of this solution is described in CloudFormation, a declarative language that can be written in JSON or YAML. In addition, CloudFormation is used as a deploy action in CodePipeline to deploy a Lambda function using the AWS Serverless Application Model (SAM).
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource. CodeBuild is used to run the commands that build the Lambda function using SAM.
  • AWS CodeCommit – Creates a CodeCommit Git repository using the AWS::CodeCommit::Repository resource. The contents for the repository are initialized using a zip file stored in S3.
  • AWS CodePipeline – Creates CodePipeline’s stages and actions in the CloudFormation template, which include using CodeCommit as a source action, CodeBuild as a build action, and CloudFormation as a deploy action (for more information, see CodePipeline concepts).
  • AWS Config – Creates the AWS::Config::ConfigurationRecorder, AWS::Config::DeliveryChannel, and AWS::Config::ConfigRule resources. The Configuration Recorder turns on AWS Config so that it records changes for all available AWS resources. The Delivery Channel identifies the S3 bucket and SNS Topic to which configuration items are recorded, and the Config Rule defines which resources are monitored for compliance with the rules that are important to your organization and/or team (a minimal sketch of the recorder and delivery channel follows this list).
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource, which defines the resources that the pipeline and other resources can access.
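
The Configuration Recorder and Delivery Channel aren’t shown in the snippets later in this post, so here’s a minimal sketch of what they might look like (the ConfigRole and ConfigBucket logical names and the RecordingGroup settings are illustrative; the template linked at the bottom of this post is authoritative). The ConfigRecorder and DeliveryChannel logical names match the DependsOn references in the Config Rule shown later:

  ConfigRecorder:
    Type: AWS::Config::ConfigurationRecorder
    Properties:
      RoleARN:
        Fn::GetAtt:
        - ConfigRole          # illustrative IAM role that AWS Config assumes to read resource configurations
        - Arn
      RecordingGroup:
        AllSupported: true    # record changes for all supported resource types
  DeliveryChannel:
    Type: AWS::Config::DeliveryChannel
    Properties:
      S3BucketName:
        Ref: ConfigBucket     # illustrative bucket that stores configuration items
      SnsTopicARN:
        Ref: MySNSTopic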

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including Config, IAM, and S3. You can find a link to the CloudFormation template at the bottom of this post.

CodeBuild

In the CloudFormation snippet below, you see how I am provisioning a CodeBuild project using the AWS::CodeBuild::Project resource. It refers to the name of the CodeCommit repository and the name of the buildspec file in the CodeCommit repo (buildspec-lambda.yml). The purpose of this resource definition is to provision CodeBuild so that it can run commands in its container to build a Lambda function that automatically remediates a noncompliant resource (in this case, an S3 bucket).

  CodeBuildLambdaTrigger:
    Type: AWS::CodeBuild::Project
    DependsOn: CodeBuildRole
    Properties:
      Name:
        Fn::Join:
        - ''
        - - Run
          - CodePipeline
          - Ref: AWS::StackName
      Description: Build application
      ServiceRole:
        Fn::GetAtt:
        - CodeBuildRole
        - Arn
      Artifacts:
        Type: NO_ARTIFACTS
      Environment:
        EnvironmentVariables:
        - Name: S3_BUCKET
          Value:
            Ref: ArtifactBucket
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/eb-nodejs-4.4.6-amazonlinux-64:2.1.3
      Source:
        BuildSpec: buildspec-lambda.yml
        Location:
          Fn::Join:
          - ''
          - - https://git-codecommit.
            - Ref: AWS::Region
            - ".amazonaws.com/v1/repos/"
            - Ref: AWS::StackName
        Type: CODECOMMIT
      TimeoutInMinutes: 10
      Tags:
      - Key: Owner
        Value: MyCodeBuildProject
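
The CodeBuildRole referenced by ServiceRole is an IAM role defined elsewhere in the template. As a rough sketch, its trust policy needs to let CodeBuild assume the role (the attached permissions, which must cover CodeCommit, S3, and CloudWatch Logs at a minimum, are omitted here):

  CodeBuildRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service: codebuild.amazonaws.com   # allow CodeBuild to assume this role
          Action: sts:AssumeRole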

CodePipeline

In the CloudFormation snippet below, you see how I am provisioning a deployment pipeline using the AWS::CodePipeline::Pipeline resource. The purpose of this resource definition is to provision all the stages and actions for the CodePipeline workflow, which gets its source files from CodeCommit, builds the code with CodeBuild, and then deploys the Lambda function with CloudFormation.

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn:
        Fn::Join:
        - ''
        - - 'arn:aws:iam::'
          - Ref: AWS::AccountId
          - ":role/"
          - Ref: CodePipelineRole
      Stages:
      - Name: Source
        Actions:
        - InputArtifacts: []
          Name: Source
          ActionTypeId:
            Category: Source
            Owner: AWS
            Version: '1'
            Provider: CodeCommit
          OutputArtifacts:
          - Name: MyApp
          Configuration:
            BranchName:
              Ref: RepositoryBranch
            RepositoryName:
              Ref: AWS::StackName
          RunOrder: 1
      - Name: Build
        Actions:
        - InputArtifacts:
          - Name: MyApp
          Name: BuildLambdaFunctions
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          OutputArtifacts:
          - Name: lambdatrigger-BuildArtifact
          Configuration:
            ProjectName:
              Ref: CodeBuildLambdaTrigger
          RunOrder: 1
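
One property not shown in this excerpt: every pipeline also needs an ArtifactStore, which tells CodePipeline where to keep the artifacts passed between stages. In this template it would look roughly like the following, at the same level as RoleArn and Stages (ArtifactBucket is the S3 bucket referenced earlier in the CodeBuild environment variables):

      ArtifactStore:
        Type: S3
        Location:
          Ref: ArtifactBucket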

CodeCommit

In the CloudFormation snippet below, you see how I am provisioning an AWS::CodeCommit::Repository resource, which creates a new CodeCommit repository to store the source files that run the remediation. To make it easy to remember, the name of the repository is the same as the CloudFormation stack name. CodeCommitS3Bucket is a CloudFormation parameter that refers to the name of the S3 bucket that you will create to store the source files. CodeCommitS3Key is a parameter that refers to the S3 object key of the zip file you will be creating.

  CodeCommitRepo:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName:
        Ref: AWS::StackName
      RepositoryDescription: CodeCommit Repository for remediation solution
      Code:
        S3:
          Bucket:
            Ref: CodeCommitS3Bucket
          Key:
            Ref: CodeCommitS3Key
      Triggers:
      - Name: MasterTrigger
        CustomData:
          Ref: AWS::StackName
        DestinationArn:
          Ref: MySNSTopic
        Events:
        - all
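
MySNSTopic, the notification topic that receives the repository trigger events, is defined elsewhere in the template. A minimal sketch of such a topic (assuming an EmailAddress parameter like the one you pass when launching the stack in Step 3):

  MySNSTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
      - Endpoint:
          Ref: EmailAddress   # email address supplied as a stack parameter
        Protocol: email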

CloudFormation

In the CloudFormation snippet below, you see that I am using CloudFormation as a CodePipeline deploy provider. Here are key components of this snippet:

  • The name of the CodePipeline stage is Deploy (it can be any valid CodePipeline name)
  • It takes in lambdatrigger-BuildArtifact as its input artifact (which is the OutputArtifacts of the Build stage)
  • It uses CloudFormation as its Deploy provider to deploy the Lambda function to AWS.
  • CHANGE_SET_REPLACE creates the CloudFormation change set if it doesn’t exist based on the stack name and template that you declare. If the change set exists, AWS CloudFormation deletes it, and then creates a new one.
  • TemplatePath refers to the generated file (template-export.json) from the SAM template that was stored in the OutputArtifacts (lambdatrigger-BuildArtifact) generated in the Build stage of this pipeline. The full reference is lambdatrigger-BuildArtifact::template-export.json.
  • CHANGE_SET_EXECUTE executes a CloudFormation change set.
      - Name: Deploy
        Actions:
        - InputArtifacts:
          - Name: lambdatrigger-BuildArtifact
          Name: GenerateChangeSetLambdaFunction
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Version: '1'
            Provider: CloudFormation
          OutputArtifacts: []
          Configuration:
            ActionMode: CHANGE_SET_REPLACE
            ChangeSetName: pipeline-changeset
            RoleArn:
              Fn::GetAtt:
              - CloudFormationTrustRole
              - Arn
            Capabilities: CAPABILITY_IAM
            StackName:
              Fn::Join:
              - ''
              - - ''
                - Ref: AWS::StackName
                - "-"
                - Ref: AWS::Region
                - ''
            TemplatePath: lambdatrigger-BuildArtifact::template-export.json
          RunOrder: 1
        - ActionTypeId:
            Category: Deploy
            Owner: AWS
            Provider: CloudFormation
            Version: 1
          Configuration:
            ActionMode: CHANGE_SET_EXECUTE
            ChangeSetName: pipeline-changeset
            StackName:
              Fn::Join:
              - ''
              - - ''
                - Ref: AWS::StackName
                - "-"
                - Ref: AWS::Region
                - ''
          InputArtifacts: []
          Name: ExecuteChangeSetFunction
          OutputArtifacts: []
          RunOrder: 2

Config

In the CloudFormation snippet below, you see how I am provisioning an AWS Config Rule using the AWS::Config::ConfigRule resource. The purpose of this resource definition is to provision the S3_BUCKET_PUBLIC_WRITE_PROHIBITED managed Config Rule, which detects S3 buckets that allow public write access. The DependsOn attribute ensures that the Config DeliveryChannel and ConfigRecorder resources have already been provisioned.

  AWSConfigRule:
    DependsOn:
    - DeliveryChannel
    - ConfigRecorder
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName:
        Ref: ConfigRuleName
      Description: Checks that your Amazon S3 buckets do not allow public write access.
        The rule checks the Block Public Access settings, the bucket policy, and the
        bucket access control list (ACL).
      InputParameters: {}
      Scope:
        ComplianceResourceTypes:
        - AWS::S3::Bucket
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_WRITE_PROHIBITED
      MaximumExecutionFrequency:
        Ref: MaximumExecutionFrequency
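
ConfigRuleName and MaximumExecutionFrequency are CloudFormation parameters rather than hardcoded values. A plausible sketch of their declarations (the defaults are illustrative; AWS Config only accepts the frequency values listed):

Parameters:
  ConfigRuleName:
    Type: String
    Default: s3-bucket-public-write-prohibited
    Description: Name for the managed Config Rule
  MaximumExecutionFrequency:
    Type: String
    Default: TwentyFour_Hours
    AllowedValues:
    - One_Hour
    - Three_Hours
    - Six_Hours
    - Twelve_Hours
    - TwentyFour_Hours
    Description: Maximum frequency for periodic Config evaluations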

Outputs

In the CloudFormation snippet below, you see that I’m defining two Outputs: PipelineUrl and LambdaTrustRole. LambdaTrustRole is used by the SAM template so that we can define the IAM role in one location and refer to its output as an input when defining the Lambda function in the SAM template. Therefore, the export name in this template must match the name the SAM template imports. Note how the two line up: this template exports ${StackName}-${Region}-LambdaTrustRole, while the SAM template joins its own stack name with -LambdaTrustRole; because the deploy action names the SAM stack ${StackName}-${Region}, both resolve to the same string.

Outputs:
  PipelineUrl:
    Value:
      Fn::Sub: https://console.aws.amazon.com/codepipeline/home?region=${AWS::Region}#/view/${Pipeline}
    Description: CodePipeline URL
  LambdaTrustRole:
    Description: IAM role that is passed to the Lambda function as its execution role.
    Export:
      Name:
        Fn::Join:
        - ''
        - - ''
          - Ref: AWS::StackName
          - "-"
          - Ref: AWS::Region
          - "-LambdaTrustRole"
    Value:
      Fn::GetAtt:
      - MyLambdaTrustRole
      - Arn

SAM Template

The AWS Serverless Application Model (SAM) uses the CloudFormation Transform to specify CloudFormation macros to convert its serverless domain-specific language (DSL) into CloudFormation. You can mix the SAM DSL with CloudFormation resource definitions in the same file.

The SAM components of this solution are described in more detail below:

  • AWS Lambda – There are two resources defined in the SAM template. The first is AWS::Lambda::Permission and the second is AWS::Serverless::Function. AWS::Serverless::Function also defines a CloudWatch Event that triggers the Lambda function defined as part of its resource definition. “Under the hood”, this calls AWS::Events::Rule to define a CloudWatch Events Rule (a rough sketch of the generated rule follows this list).
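
To make the “under the hood” behavior concrete, below is roughly the CloudWatch Events Rule that the SAM transform synthesizes from the S3CWE event shown in the next section (the logical IDs are illustrative; SAM derives its own generated names):

  S3FS3CWE:
    Type: AWS::Events::Rule
    Properties:
      EventPattern:
        source:
        - aws.config
        detail:
          requestParameters:
            evaluations:
              complianceType:
              - NON_COMPLIANT
          additionalEventData:
            managedRuleIdentifier:
            - S3_BUCKET_PUBLIC_WRITE_PROHIBITED
      Targets:
      - Arn:
          Fn::GetAtt:
          - S3F
          - Arn
        Id: S3FS3CWELambdaTarget   # target the remediation function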

Serverless Function

The snippet below, from the sam-s3-remediation.yml file (a SAM template whose full contents appear later in this post), provisions the Lambda function by defining the name of its handler (which is associated with the name of its file: index.js), the runtime environment and version (Node.js 10.x), and a CloudWatch Event Rule with the pattern that must match in order to trigger the event that runs the Lambda function as its target.

  S3F:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.handler
      Runtime: nodejs10.x
      Events:
        S3CWE:
          Type: CloudWatchEvent
          Properties:
            Pattern:
              source:
                - aws.config
              detail:
                requestParameters:
                  evaluations:
                    complianceType:
                      - NON_COMPLIANT
                additionalEventData:
                  managedRuleIdentifier:
                    - S3_BUCKET_PUBLIC_WRITE_PROHIBITED
      Role:
        'Fn::ImportValue':
          'Fn::Join':
            - '-'
            - - Ref: 'AWS::StackName'
              - LambdaTrustRole


Deployment Steps

There are four main steps in launching this solution: preparing an AWS account, creating and storing the source files, launching the CloudFormation stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any fees incurred while creating and launching your solution.

Step 1. Prepare an AWS Account

Here are the prerequisites and assumptions for this solution:

  • AWS Account – Follow these instructions to create an AWS Account: Creating an AWS Account. Be sure you’ve signed up for the CloudFormation service and have the proper permissions.
  • AWS CloudTrail – A CloudTrail trail should already be enabled for the region in which you’re running this example; the CloudWatch Event Rule that triggers the remediation matches the PutEvaluations API call that Config makes, and these API-call events are only delivered when CloudTrail is enabled. See Creating a Trail for more information.
  • AWS Region – Use the region selector in the navigation bar of the console to choose your preferred AWS region
  • AWS Cloud9 – All of the CLI examples assume you’re using the AWS Cloud9 IDE. To learn how to install and configure Cloud9, see Automating AWS Cloud9. You can easily adapt the commands to perform the same actions in a different IDE, but those instructions are outside the scope of this post.
  • AWS Config – This solution assumes you’ve not enabled AWS Config. If you have, you’ll receive an error when launching the CloudFormation stack, since it provisions a Config Recorder and Delivery Channel and AWS allows only one of each per region.

Step 2. Create and Store Source Files

In this section, you will create six source files that will be stored in S3 and then uploaded to AWS CodeCommit when launching the CloudFormation stack. The file names are shown in the commands below.

From your AWS Cloud9 terminal, set up your directory structure and create the S3 bucket that will hold the source bundle (replace REGIONCODE with your AWS region code):

cd ~/environment
aws s3 mb s3://ccoa-lesson0-$(aws sts get-caller-identity --output text --query 'Account') --region REGIONCODE
sudo rm -rf ~/environment/tmp-ccoa
mkdir ~/environment/tmp-ccoa
cd ~/environment/tmp-ccoa
mkdir codecommit
cd ~/environment/tmp-ccoa/codecommit

Create empty source files:

touch buildspec-lambda.yml
touch ccoa-remediation-pipeline.yml
touch index.js
touch package.json
touch README.md
touch sam-s3-remediation.yml

Save the files.

buildspec-lambda.yml

Copy the contents below into buildspec-lambda.yml and save the file. AWS CodeBuild will use this buildspec to build the Lambda function that runs the automatic compliance remediation to fix an S3 bucket whose policy is too permissive.

CodeBuild runs an aws cloudformation CLI command to package the SAM template: it zips and uploads the Lambda source to S3 and emits a transformed template (template-export.json) that points at that S3 location, so that CloudFormation can deploy the code for Lambda to run.

version: 0.2
phases:
  build:
    commands:
      - npm install
      - npm install aws-cli-js
      - >-
        aws cloudformation package --template sam-s3-remediation.yml --s3-bucket
        $S3_BUCKET --output-template template-export.json
artifacts:
  type: zip
  files:
    - template-export.json
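
For orientation, aws cloudformation package uploads the zipped function code to the bucket in $S3_BUCKET and rewrites the function’s code location in the exported template. The relevant fragment of template-export.json looks roughly like this (rendered as YAML; the bucket name and object key are illustrative):

  S3F:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs10.x
      CodeUri: s3://my-artifact-bucket/0123456789abcdef   # illustrative location written by the package command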

ccoa-remediation-pipeline.yml

Copy the source contents from the ccoa-remediation-pipeline.yml file and save them to a local file in your Cloud9 environment called ccoa-remediation-pipeline.yml. The file is a roughly 500-line CloudFormation template.

index.js

Copy the contents below into the index.js file and save the file. This is the Node.js function that Lambda runs to remove the bucket policy from S3 buckets that allow public writes.

var AWS = require('aws-sdk');

exports.handler = function(event) {
  console.log("request:", JSON.stringify(event, undefined, 2));

  var s3 = new AWS.S3({apiVersion: '2006-03-01'});
  // The Config rule evaluations are embedded in the CloudWatch Event detail
  var evaluations = event['detail']['requestParameters']['evaluations'];
  console.log("evaluations:", JSON.stringify(evaluations, null, 2));

  for (var i = 0, len = evaluations.length; i < len; i++) {
    if (evaluations[i]["complianceType"] == "NON_COMPLIANT") {
      // complianceResourceId is the name of the offending S3 bucket
      console.log(evaluations[i]["complianceResourceId"]);
      var params = {
        Bucket: evaluations[i]["complianceResourceId"]
      };

      // Remediate by deleting the bucket policy that allows public writes
      s3.deleteBucketPolicy(params, function(err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else     console.log(data);           // successful response
      });
    }
  }
};
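
For orientation, the portion of the CloudWatch Event that this handler reads looks roughly like the following (rendered as YAML for brevity; the values are illustrative):

detail:
  eventSource: config.amazonaws.com
  eventName: PutEvaluations
  requestParameters:
    evaluations:
    - complianceResourceType: AWS::S3::Bucket
      complianceResourceId: ccoa-s3-write-violation-123456789012   # bucket name passed to deleteBucketPolicy
      complianceType: NON_COMPLIANT
  additionalEventData:
    managedRuleIdentifier: S3_BUCKET_PUBLIC_WRITE_PROHIBITED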

package.json

Copy the contents below into the package.json file and save the file. This is the metadata file that npm reads when it installs the app’s dependencies during the build.

{
  "name": "s3-bucket-public-write-prohibited-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"run some tests here\""
  },
  "author": "",
  "license": "ABC"
}

README.md

Copy the contents below into the README.md file and save the file.

# s3-bucket-public-write-prohibited-app

sam-s3-remediation.yml

Copy the contents below into the sam-s3-remediation.yml file and save the file. This is the SAM template that defines the Lambda function and the CloudWatch Event Rule that triggers the Lambda function. The purpose of this CloudWatch Event Rule is to detect noncompliant S3 buckets and then automatically run a Lambda function that remediates these noncompliant buckets.

AWSTemplateFormatVersion: 2010-09-09
Transform:
  - 'AWS::Serverless-2016-10-31'
Resources:
  PermissionForEventsToInvokeLambda:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName:
        Ref: S3F
      Action: 'lambda:InvokeFunction'
      Principal: events.amazonaws.com
      SourceArn:
        'Fn::GetAtt':
          - S3F
          - Arn
  S3F:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.handler
      Runtime: nodejs10.x
      Events:
        S3CWE:
          Type: CloudWatchEvent
          Properties:
            Pattern:
              source:
                - aws.config
              detail:
                requestParameters:
                  evaluations:
                    complianceType:
                      - NON_COMPLIANT
                additionalEventData:
                  managedRuleIdentifier:
                    - S3_BUCKET_PUBLIC_WRITE_PROHIBITED
      Role:
        'Fn::ImportValue':
          'Fn::Join':
            - '-'
            - - Ref: 'AWS::StackName'
              - LambdaTrustRole

Sync the files with your S3 bucket

In this section, you will zip and upload all of the source files to S3 so that they can be committed to the CodeCommit repository that is automatically provisioned by the stack generated by the ccoa-remediation-pipeline.yml template.

NOTE: Make a note of the S3 bucket you create, as you will use it as a parameter when launching your CloudFormation stack.

From your AWS Cloud9 environment, type the following:

cd ~/environment/tmp-ccoa/codecommit
zip ccoa-lesson0-examples.zip *.*
aws s3 sync ~/environment/tmp-ccoa/codecommit s3://ccoa-lesson0-$(aws sts get-caller-identity --output text --query 'Account')

Step 3. Launch the Stack

From your AWS Cloud9 environment, type the following (replacing EMAILADDRESS@example.com and REGIONCODE with the appropriate values):


aws cloudformation create-stack --stack-name ccoa-rem --template-body file:///home/ec2-user/environment/tmp-ccoa/codecommit/ccoa-remediation-pipeline.yml --parameters ParameterKey=EmailAddress,ParameterValue=EMAILADDRESS@example.com ParameterKey=CodeCommitS3Bucket,ParameterValue=ccoa-lesson0-$(aws sts get-caller-identity --output text --query 'Account') ParameterKey=CodeCommitS3Key,ParameterValue=ccoa-lesson0-examples.zip --capabilities CAPABILITY_NAMED_IAM --disable-rollback --region REGIONCODE

Step 4. Test the Deployment

First, verify that the CloudFormation stack you just launched (called ccoa-rem) was successfully created. Click on the PipelineUrl Output to open the deployment pipeline in CodePipeline and see it running. Verify that the pipeline successfully goes through all stages (as shown below).

Next, you’ll create an S3 bucket that allows anyone to put files into it. You’re doing this purely for demonstration purposes, since you should never grant this kind of public write access to your S3 buckets. Here are the steps:

  1. Go to the S3 console
  2. Click the Create bucket button
  3. Enter ccoa-s3-write-violation-ACCOUNTID in the Bucket name field (replacing ACCOUNTID with your account id)
  4. Click Next on the Configure Options screen
  5. Unselect the Block all public access checkbox and click Next on the Set Permissions screen (see the figure below)
  6. Click Create bucket on the Review screen
  7. Select the ccoa-s3-write-violation-ACCOUNTID bucket and choose the Permissions tab
  8. Click on Bucket Policy and paste the contents from below into the Bucket policy editor text area (replace both mybucketname values with the ccoa-s3-write-violation-ACCOUNTID bucket you just created)
  9. Click the Save button

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:Abort*",
        "s3:DeleteObject",
        "s3:GetBucket*",
        "s3:GetObject",
        "s3:List*",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucketname",
        "arn:aws:s3:::mybucketname/*"
      ]
    }
  ]
}

You’ll receive this message: You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket.

View Config Rules

In this section, you’ll verify that the Config Rule has been triggered and that the S3 bucket resource has been automatically remediated:

  1. Go to the Config console
  2. Click on Rules (your s3-bucket-public-write-prohibited rule should be noncompliant as shown below)
  3. Select the s3-bucket-public-write-prohibited
  4. Click the Re-evaluate button
  5. Go back to Rules in the Config console
  6. Go to the S3 console, choose the ccoa-s3-write-violation-ACCOUNTID bucket, and verify that the bucket policy has been removed
  7. Go back to Rules in the Config console and confirm that the s3-bucket-public-write-prohibited rule is Compliant

What’s Next?

In this post, you learned how to set up a robust automated compliance and remediation infrastructure for noncompliant AWS resources using services such as S3, AWS Config & Config Rules, Amazon CloudWatch Event Rules, AWS Lambda, and IAM. You did this by automating all of the provisioning using AWS CloudFormation, AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild.

By leveraging this approach, your AWS infrastructure can rapidly scale resources while ensuring those resources are always in compliance, without humans needing to manually intervene.

Consider the possibilities when adding hundreds if not thousands of rules and remediations to your AWS infrastructure. Below is just a sample of the different types of Managed Config Rules you can run. What if you took each of these, developed custom remediations for them, and ensured they were running across all of your AWS accounts? Or, what if you wrote your own Config Rules and triggered CloudWatch Events to execute remediations you developed in Lambda? This way your compliance remains in lockstep with the rest of your AWS infrastructure.

As a result, engineers can focus their efforts on automating the prevention, detection, and remediation of their AWS infrastructure rather than manually hunting down every noncompliant resource, creating a ticket, and manually fixing the issue. This is Continuous Compliance – at scale!

Sample Code

The code for the examples demonstrated in this post is located here. Let us know if you have any comments or questions @mphasis or @stelligent.
