
DevOps in AWS Radio: Automating Compliance with AWS Config and Lambda (Episode 3)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and speak about Automating Compliance using AWS Config, Config Rules and AWS Lambda. Here are the show notes:

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

ChefConf 2016

Highlights and Announcements

ChefConf 2016 was held in Austin, TX last week, and as usual, Stelligent sent a few engineers to take advantage of the learning and networking opportunities. This year’s conference was a 4 day event with a wide variety of workshops, lectures, keynote speeches, and social events. Here is a summary of the highlights and announcements.

Chef Automate

Chef Automate was one of the big announcements at ChefConf 2016. It is billed as “One platform that delivers DevOps workflow, automated compliance, and end-to-end pipeline visibility.” It brings together Chef for infrastructure automation, InSpec for compliance automation, and Habitat for application automation, and delivers a full-stack deployment pipeline along with customizable dashboards for comprehensive visibility. Chef Compliance and Chef Delivery have been phased out as stand-alone products and replaced by Chef Automate as the company’s flagship commercial offering.


Habitat

Habitat was actually announced back in June, but it was a big focus of ChefConf 2016. There were five info sessions and a Habitat Zone for networking with and learning from other community members. Habitat is an open source project that focuses on application automation and provides a packaging system that results in apps that are “immutable and atomically deployed, with self-organizing peer relationships.” Here are the key features listed on the project website:

  • Habitat is unapologetically app-centric. It’s designed with best practices for the modern application in mind.
  • Habitat gives you a packaging format and a runtime supervisor with deployment coordination and service discovery built in.
  • Habitat packages contain everything the app needs to run with no outside dependencies. They are isolated, immutable, and auditable.
  • The Habitat supervisor knows the packaged app’s peer relationships, upgrade strategy, and policies for restart and security. The supervisor is also responsible for runtime configuration and connecting to management services, such as monitoring.

Chef Certification

A new Chef Certification program was also announced at the conference. It is a badge-based program where passing an exam for a particular competency earns you a badge. A certification is achieved by earning all the required badges for that learning track. The program is in an early adopter phase and not all badges are available yet. Here’s what those tracks look like right now:

Chef Certified Developer

ccd

 

Chef Certified Windows Developer

ccwd

 

Chef Certified Architect

cca

 

Join Us

Do you love Chef? Do you love AWS? Do you love automating software development workflows to create CI/CD pipelines? If you answered “Yes!” to any of these questions then you should come work at Stelligent. Check out our Careers page to learn more.

Cross-Account Access Control with Amazon STS for DynamoDB

In this post, we’ll be talking about creating cross-account access for DynamoDB. DynamoDB is a NoSQL Database in the cloud provided by Amazon Web Services.

Whether you’re creating a production deployment pipeline that leverages a shared Keystore or deploying an application in multiple accounts with shared resources, you may find yourself wondering how to provide access to your AWS resources from multiple AWS accounts.

Keystore is an open source pipeline secret management tool from Stelligent that is backed by DynamoDB and encrypted with Amazon’s Key Management System. Check it out on Github.

Although we will focus on DynamoDB, the concepts discussed in this post are not necessarily limited to DynamoDB and have many other uses for a variety of AWS services where multi-account access control is desired.

DynamoDB does not provide any built-in access control; however, it does provide an interface to fine-grained access control for users. If you’re looking to provide access to DynamoDB from a web app, mobile app or federated user, check out the documentation in AWS to get started with AWS’ Identity and Access Management (IAM).

This post will focus on leveraging IAM Roles and AWS’ Security Token Service (STS) to provide the more advanced access control to our DynamoDB tables.

In our use-case, we needed to provide a second account access to our DynamoDB tables for a Keystore implementation. The goal was to provide this second account access to our secrets without duplicating the data or the storage costs.

The plan is to leverage the features of IAM and STS to provide the access control. This works by creating two roles (a minimal CLI sketch of the corresponding policies follows the list below):

  • The role created on Account A will provide access to DynamoDB and KMS, and allow Account B to assume it.
  • The role created on Account B will provide access to STS’ AssumeRole action against our role in Account A. Any host or user with this role will be able to acquire temporary API credentials from STS for Account A.
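To make this concrete, here's a minimal sketch of the two roles using the AWS CLI. The account IDs (111111111111 for Account A, 222222222222 for Account B), the Account B role name, the table name, and the KMS key ID are all placeholders; only AccountARole is reused later in this post. In practice you'd codify the same policies in CloudFormation, as the launch buttons below do.

# Run in Account A: create a role that Account B is allowed to assume
aws iam create-role --role-name AccountARole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Run in Account A: scope the role to the specific DynamoDB table and KMS key
aws iam put-role-policy --role-name AccountARole --policy-name dynamodb-kms-read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {"Effect": "Allow",
       "Action": ["dynamodb:GetItem", "dynamodb:Query"],
       "Resource": "arn:aws:dynamodb:us-east-1:111111111111:table/credential-store"},
      {"Effect": "Allow",
       "Action": ["kms:Decrypt"],
       "Resource": "arn:aws:kms:us-east-1:111111111111:key/<key-id>"}
    ]
  }'

# Run in Account B: allow the instance role to call sts:AssumeRole on the Account A role
aws iam put-role-policy --role-name AccountBInstanceRole --policy-name assume-account-a \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::111111111111:role/AccountARole"
    }]
  }'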

For more information on how this works under the hood, check out the AWS Documentation on Cross-Account Access Delegation.

As a security best practice, you’ll want to ensure the access provided is as specific as possible. You should limit access to specific actions, DynamoDB tables and keys in KMS.

When creating resources in your account, it’s always a good idea to use a configuration management tool and for our examples we will be using CloudFormation to configure and provision the IAM and STS resources.

Step 1: Create a Role in Account A

  • Allow STS to assume it from Account B
  • Attach a policy to allow access to DynamoDB and KMS

cloudformation-launch-stack

Click here to review the CloudFormation template used by the launch button above.

Step 2: Create a Role in Account B

  • Allow STS AssumeRole from Amazon EC2
  • Allow access to only your Account A assumable ARN
    • You’ll need the ARN from Step 1

cloudformation-launch-stack

Click here to review the CloudFormation template used by the launch button above.

Step 3: Try it out!

  • You can use the AWS CLI to retrieve a temporary set of credentials from STS to use for access to Account A’s DynamoDB!

Our very own Jeff Bachtel has adapted a snippet for acquiring and implementing temporary STS credentials into your shell. Here it is:

iam-assume-role.sh

#!/bin/bash -e
#
# Adapted from https://gist.github.com/ambakshi/ba0fe456bb6da24da7c2
#
# Clear out existing AWS session environment, or the awscli call will fail
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_SECURITY_TOKEN

ROLE_ARN="${1:-arn:aws:iam::123456789:role/AccountARole}"
DURATION="${2:-900}"
NAME="${3:-$LOGNAME@`hostname -s`}"

# KST=access*K*ey, *S*ecretkey, session*T*oken
KST=(`aws sts assume-role --role-arn "${ROLE_ARN}" \
                          --role-session-name "${NAME}" \
                          --duration-seconds ${DURATION} \
                          --query '[Credentials.AccessKeyId,Credentials.SecretAccessKey,Credentials.SessionToken]' \
                          --output text`)

echo 'export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION:-us-east-1}'
echo "export AWS_ACCESS_KEY_ID='${KST[0]}'"
echo "export AWS_SECRET_ACCESS_KEY='${KST[1]}'"
echo "export AWS_SESSION_TOKEN='${KST[2]}'"      # older var seems to work the same way
echo "export AWS_SECURITY_TOKEN='${KST[2]}'"

From an EC2 instance launched with the role created in Step 2, we can use this script to test our cross-account access.

$ eval $(./iam-assume-role.sh arn:aws:iam::123456789:role/AccountARole)
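Before switching back, you can confirm the assumed-role credentials are active and that Account A's tables are visible. These are standard CLI calls; the region is assumed to be us-east-1:

# Should report the assumed-role ARN in Account A
aws sts get-caller-identity

# Lists Account A's DynamoDB tables, confirming cross-account access
aws dynamodb list-tables --region us-east-1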

When you’re ready to go back to your own Instance Profile credentials, you can unset the temporary token:

unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_SECURITY_TOKEN

Wrapping Up

As you can see, using the power of IAM and STS to bridge the gap between two accounts and share resources is quite easy and secure. There are plenty of possibilities here that go beyond DynamoDB and KMS, allowing you to reduce costs and technical debt.

Automate CodePipeline Manual Approvals in CloudFormation

Recently, AWS announced that it added manual approval actions to AWS CodePipeline. With this addition, you can now model your entire software delivery process – whether it’s entirely manual or a hybrid of automated and manual approval actions.

 

In this post, I describe how you can add manual approvals to an existing pipeline – manually or via CloudFormation – to minimize your CodePipeline costs.

Pricing

The AWS CodePipeline pricing model is structured to incentivize two things:

  • Frequent Code Commits
  • Long-lived Pipelines

This is because AWS charges $1 per active pipeline per month. Therefore, if you were to treat these pipelines as ephemeral – launching and terminating them regularly – you’d likely pay more than you otherwise would. While in experimentation mode you might regularly launch and terminate pipelines as you determine the appropriate stages and actions for an application/service, once you’ve established the pipeline, it changes far less frequently.

Since CodePipeline uses compute resources, AWS had to decide whether to incentivize frequent code commits or to treat pipelines as ephemeral resources – as it does with EC2. If it had chosen to charge by frequency of activity, you could end up paying more for committing more code – which would be a very bad thing, since you want developers to be committing code many times a day.

Immutability

While we tend to prefer an immutable approach for most of our infrastructure, the fact is that different parts of your system change at varying frequencies. This is the case with your pipelines. Once your pipelines have been established, you might occasionally add, edit, or remove some stages and actions, but probably not every day.

Our “workaround” is to use CloudFormation’s update capability to modify our pipeline’s stages and actions without incurring the additional $1 that we’d get charged if we were to launch a new active pipeline.

The best way to apply these changes is to make the minimum required changes in the template, so that any errors that do occur are easy to isolate.

Manual Approvals

There are many reasons your software delivery workflow might require manual approvals including exploratory testing, visual inspection, change advisory boards, code reviews, etc.

Some other reasons for manual approvals include canary and blue/green deployments – where you might make final deployment decisions once some user or deployment testing is complete.

With manual approvals in CodePipeline, you can now make the approval process a part of a fully automated software delivery process.

Create and Connect to a CodeCommit Repository

Follow these instructions for creating and connecting to an AWS CodeCommit repository: Create and Connect to an AWS CodeCommit Repository. Take note of the repository name, as you’ll be using it as a CloudFormation user parameter later. The default that I use in the lab is called codecommit-demo, but you can modify this CloudFormation parameter.
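If you’d rather use the CLI than the console walkthrough, creating and cloning the repository looks roughly like this. This is a hedged sketch: it assumes the us-east-1 region and that you’re using the HTTPS credential helper for Git.

# Create the repository used in this lab
aws codecommit create-repository --repository-name codecommit-demo --region us-east-1

# Configure the CodeCommit credential helper for Git over HTTPS
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Clone the (empty) repository
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/codecommit-demo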

Launch a Pipeline

Click the button below to launch a CloudFormation stack that provisions AWS CodePipeline with some default Lambda Invoke actions.

Once the CloudFormation stack has launched successfully, click on the link next to the PipelineUrl output of your CloudFormation stack. This opens your pipeline in the CodePipeline console. You should see a pipeline similar to the one in the figure below.

pipeline_before_update

Update a Pipeline

To update your pipeline, click the Edit button at the top of the pipeline in CodePipeline. Then, click the (+) Stage link between the Staging and Production stages. Enter the name ExploratoryTesting for the stage name, then click the (+) Action link. The add action window displays. Choose the new Approval action category from the drop-down and enter the other required and optional fields, as appropriate. Finally, click the Add action button.

codepipeline_manual_approvals_pipeline_edit

Once you’ve done this, click on the Release change button. Once the change goes through the pipeline stages and actions, it transitions to the ExploratoryTesting stage, where your pipeline should look similar to the figure below.

pipeline_before_after

At this time, if the SNS topic registered with the approval action has an email subscription, you’ll receive an email message that looks similar to the one below.

codepipeline_manual_approvals

As you can see, you can click on the link in the message to be brought to the same pipeline, where you can approve or reject the change.
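You can also approve or reject from the AWS CLI rather than the console. This is a hedged sketch: the stage and action names match the ones configured above, {YOURPIPELINENAME} is your pipeline, and the token placeholder comes from the pipeline state output.

# Find the pending approval action and its token
aws codepipeline get-pipeline-state --name {YOURPIPELINENAME}

# Approve (or use status=Rejected) with the token returned above
aws codepipeline put-approval-result \
  --pipeline-name {YOURPIPELINENAME} \
  --stage-name ExploratoryTesting \
  --action-name QA \
  --result summary="Exploratory tests passed",status=Approved \
  --token <approval-token-from-get-pipeline-state>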

Applying Changes in CloudFormation

You can apply the same updates to CodePipeline that you previously performed manually, this time in code, using CloudFormation update-stack. We recommend minimizing the number of incremental changes you apply with CloudFormation so that they are specific to the CodePipeline changes; limiting your change sets often limits the amount of time you spend troubleshooting any problems.

Once you’ve manually added the new manual approval stage and action, you can use the AWS CLI to get the JSON configuration to use in your CloudFormation update template. To do this, run the following command, substituting {YOURPIPELINENAME} with the name of your pipeline.

aws codepipeline get-pipeline --name {YOURPIPELINENAME} >pipeline.json

Notice that the command redirects the output to a file, which you can copy from and reformat into the stage and action configuration of your CloudFormation template. For example, the difference between the initial pipeline and the updated pipeline is the JSON configuration shown below.

          {
            "Name":"ExploratoryTesting",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"QA",
                "ActionTypeId":{
                  "Category":"Approval",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"Manual"
                },
                "OutputArtifacts":[

                ],
                "Configuration":{
                  "NotificationArn":{
                    "Fn::Join":[
                      "",
                      [
                        "arn:aws:sns:",
                        {
                          "Ref":"AWS::Region"
                        },
                        ":",
                        {
                          "Ref":"AWS::AccountId"
                        },
                        ":",
                        {
                          "Ref":"SNSTopic"
                        }
                      ]
                    ]
                  },
                  "CustomData":"Approval or Reject this change after running Exploratory Tests"
                },
                "RunOrder":1
              }
            ]
          },

You can take this code and add it to a new CloudFormation template so that it sits between the Staging and Production stages. Once you’ve done this, go back to your command line and run the update-stack command from the AWS CLI. An example is shown below; replace {CFNSTACKNAME} with your stack name. If you want to make additional changes to the new stack, you can download the CloudFormation template, modify it, and upload it to an S3 location you control.

aws cloudformation update-stack --stack-name {CFNSTACKNAME} --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/codepipeline-updates-after.json --region us-east-1 --capabilities="CAPABILITY_IAM" --parameters ParameterKey=RepositoryBranch,UsePreviousValue=true ParameterKey=RepositoryName,UsePreviousValue=true ParameterKey=S3BucketLambdaFunction,UsePreviousValue=true ParameterKey=SNSTopic,UsePreviousValue=true

By running this command against the initial stack, you’ll see the same updates that you’d manually defined previously. The difference is that they’re now defined in code, which means you can version, test, and deploy the changes.

An alternative approach is to apply the changes using Update Stack from the CloudFormation console. You enter the new CloudFormation template as an input, and CloudFormation determines which changes it will apply to your infrastructure. You can see a screenshot of the changes that CloudFormation will apply below.

codepipeline_preview_changes.jpg
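If you prefer to preview the same pipeline changes from the CLI before applying them, CloudFormation change sets provide an equivalent view. This sketch reuses the stack name placeholder and the template URL from the update-stack example above:

# Create and review a change set, then execute it once you're satisfied
aws cloudformation create-change-set --stack-name {CFNSTACKNAME} --change-set-name add-manual-approval --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/codepipeline-updates-after.json --region us-east-1 --capabilities CAPABILITY_IAM --parameters ParameterKey=RepositoryBranch,UsePreviousValue=true ParameterKey=RepositoryName,UsePreviousValue=true ParameterKey=S3BucketLambdaFunction,UsePreviousValue=true ParameterKey=SNSTopic,UsePreviousValue=true
aws cloudformation describe-change-set --stack-name {CFNSTACKNAME} --change-set-name add-manual-approval
aws cloudformation execute-change-set --stack-name {CFNSTACKNAME} --change-set-name add-manual-approval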

Summary

By incorporating manual approvals into your software delivery process, you can model the entire workflow – automated and manual steps alike – in a single pipeline. You also learned how to apply changes to your pipeline using CloudFormation as a way of minimizing your costs while providing a repeatable, reliable update process through code.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/codepipeline. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Acknowledgements

My colleagues at Stelligent including Eric Kascic and Casey Lee provided some use cases for manual approvals.

 

Beyond Continuous Deployment

If I were to whittle the principle behind all modern software development approaches into one word, that word would be: feedback. By “modern approaches”, I’m referring to DevOps, Continuous Integration (CI), Continuous Delivery (CD), Continuous Deployment, Microservices, and so on. For definitions of these terms, see the Stelligent Glossary.

It’s not just feedback: it’s fast and effective feedback. It’s incorporating that feedback into subsequent behavior. It’s amplifying this feedback back into the development process to affect future work as soon as possible.

In this post, I describe how we can move beyond continuous deployment by focusing on the principle of feedback.

Feedback

I think it’s important to define all of this as one word and one principle because it’s so easy to get lost in the weeds of tools, processes, patterns, and practices. When I’m presented with a new concept, a problem, or an approach in my work, I often ask myself: “How will this affect feedback? Will it increase fast, effective feedback or decrease it?” These questions often help guide my decision making.

Amazon Web Services (AWS) has a great talk on how Amazon embraced DevOps in their organization (well before it was called “DevOps”). As part of the talk, they show an illustration similar to the one you see below in which they describe the feedback loop between developers and customers.

deployment_pipeline_feedback
Deployment Pipeline Feedback Loop (Based on: https://aws.amazon.com/devops/what-is-devops/)

They go on to describe two key points with this feedback loop:

  1. How quickly you’re able to get through the feedback loop determines your customer responsiveness and your ability to innovate.
  2. In the eyes of your customers, you’re only delivering value when you’re spending time on the left side – developing new features.

The key, as AWS describes, is that any time you spend on building the pipeline itself or hand-holding changes through this pipeline, you’re not delivering value – at least in the eyes of your customer. So, you want to maximize the time you’re spending on the left side (developing new features) and minimize the time you’re spending in the middle – while delivering high-quality software that meets the needs of your customers.

They go on to describe DevOps as anything that helps increase these feedback loops, which might include changes to the organization, process, tooling or culture.

The Vision

There’s a vision around feedback that I’ve discussed with a few people, and I only recently realized that I hadn’t shared it with the wider software community. In many ways, I still feel like it’s “Day 1” when it comes to software delivery. As mentioned, some awesome tools, approaches, and practices have been introduced in the past few years – Cloud, Continuous Delivery, Microservices, and Serverless – but we’ll consider all of this ancient history several years from now.

Martin Fowler is fond of saying that software integration should be a “non event”. I wholeheartedly agree but the reality is that even in the best CI/CD environments, there are still lots of events in the form of interruptions and wait time associated with the less creative side of software development (i.e. the delivery of the software to users).

The vision I describe below is inspired by an event on Continuous Integration that I attended in 2008 that Andy Glover describes on The Disco Blog. I’m still working on the precise language of this vision, but by focusing on fast and effective feedback, it led me to what I describe here:

Beyond Continuous Deployment

Using smart algorithms, code is automatically integrated and pushed to production in nanoseconds as a developer continues to work as long as it passes myriad validation and verification checks in the deployment pipeline. The developer is notified of success or failure in nanoseconds passively through their work environment.

I’m sure there are some physics majors who might not share my “nanoseconds” view on this, but sometimes approaching problems devoid of present day limitations can lead to better future outcomes. Of course, I don’t think people will complain if it’s “seconds” instead of “nanoseconds” as it moves closer toward the vision.

This vision goes well beyond today’s notion of “Continuous Deployment”, which relies on developers to commit code to a version-control repository according to each developer’s idiosyncrasies and schedule. In this case, smart algorithms would determine when and how often code is “committed” to a version-control repository and, moreover, these same algorithms would be responsible for orchestrating it into the pipeline on its way to production.

These smart algorithms could be applied when a code block is complete or some other logical heuristic. It’d likely use some type of machine learning algorithm to determine these logical intervals, but it might equate to hundreds of what we call “commits” per developer, per day. As a developer, you’d continue writing code as these smart algorithms automatically determine the best time to integrate your changes with the rest of the code base. In your work environment, you might see some passive indicators of success or failure as you continue writing code (e.g. color changes and/or other passive notifiers). The difference is that your work environment is informing you not just based on some simple compilation, but based upon the full creation and verification of the system as part of a pipeline – resulting in an ultra fast and effective feedback cycle.

The “developer’s work environment” that I describe in the vision could be anything from what we think of as an Integrated Development Environment (IDE) to a code editor, to a developer’s environment managed in the cloud. It doesn’t really matter because the pipeline runs based on the canonical source repository as defined and integrated through the smart algorithms and orchestrated through a canonical pipeline.

Some deployment pipelines today effectively use parallel actions to increase the throughput of system changes. But even in the most effective pipelines, there’s still a somewhat linear process in which some set of actions relies upon preceding actions succeeding in order to initiate its downstream action(s). The pipelines that would enable this vision would need to rethink the approach to parallelization in order to provide feedback as fast as I’m suggesting.

This approach will also likely require more granular microservices architectures as a means of decreasing the time it takes to provide fast and effective feedback.

In this vision, you’d continue to separate releases from deployments whereas deployments will regularly occur in this cycle, but releases would be associated with more business-related concerns. For example, you might have thousands of deployments for a single application/service in a week, but maybe only a single release during that same time.

If a deployment were to fail, it wouldn’t deploy the failure to production. It only deploys to production if it passes all of the defined checks that are partially built from machine learning and other autonomic system techniques.

Summary

By focusing on the principle of feedback, you can eliminate a lot of the “noise” when it comes to making effective decisions on behalf of your customers and your teams. Your teams need fast and effective feedback to be more responsive to customers. I shared how you can often arrive at better decisions by focusing on principles over practices. Finally, this vision goes well beyond today’s notion of Continuous Deployment to enable even more effective engineer and customer responsiveness.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

DevOps in AWS Radio: Serverless Delivery with Casey Lee (Episode 2)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and speak with Casey Lee about his three-part series on Serverless Delivery:

 

About DevOps in AWS Radio

On DevOps in AWS Radio, we’ll be covering topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Automating Penetration Testing in a CI/CD Pipeline (Part 3)

Continuous Security: Security in the Continuous Delivery Pipeline is a series of articles addressing security concerns and testing in the Continuous Delivery pipeline. This is the eighth article in the series.

Introduction

In this third and final post we’ll cover fully integrating the work we did in the first two posts into a CI/CD pipeline for maximum efficiency. In the first post, we discussed what OWASP ZAP is, how it’s installed and automating that installation process with Ansible. The second post followed up with how to script our penetration tests and manage the results. Strap in for this final article where everything now comes together (cue fanfare).

Wrapping Pen Tests in a CI/CD Pipeline

If you recall the diagram from the first post, we will now be taking each of these steps and wrapping them up into our pipeline. We have the necessary scripts to run the tests and manage the results. What’s left is managing a ZAP server and fetching the necessary information to run our penetration tests against.

ZAP Basic CI/CD flow diagram

ZAP will need to be running on a server or a server must be spun up during the penetration testing phase of your pipeline. If using AWS, an AMI can exist that will make provisioning and destroying a ZAP instance extremely easy. For this we’ll be using Packer. We’ll briefly discuss how to deploy the Packer-based AMI provided in the Stelligent Zap repository, though a detailed look into Packer is a bit beyond the scope of this post.

Start by editing the ‘zap-ami-packer.json’ file to match your environment. This includes the AWS region, EC2 key pair, EC2 instance type, and source AMI, among other settings. (Please note, Amazon Linux was used in these examples; changing the distribution may require changes to the Ansible playbook.)

Assuming Packer is installed, building the image is as simple as calling the supplied ‘create-image.sh’ script. This requires your AWS credentials to be set in your environment before calling the script.

export AWS_ACCESS_KEY_ID=<your_aws_access_key>
export AWS_SECRET_KEY=<your_aws_secret_key>
./create-image.sh

This will output an AMI ID that your automation scripts can use to build an EC2 instance from.

To bring this all together effectively, we need the penetration testing script to be triggered as a step in our CI/CD pipeline. To do this, the script must know where the ZAP server resides and where the target application is, report results in an easily accessible manner, and cause Jenkins to report a correct ‘pass’ or ‘fail’ status for the job itself.

To achieve this we’ll wrap everything up in a fairly simple Bash script that is called from Jenkins.

To start, we must first determine the ZAP host and the target host/URL of the application to be penetration tested. This will be specific to the environment. If running in AWS, the EC2 or CloudFormation resources could be queried, or if the environment is pre-set, these values can be passed to the workspace as variables.
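For example, if the ZAP instance is tagged and your application stack exposes its URL as an output, both values could be discovered with the CLI. The stack name, output key, and tag value here are hypothetical:

# Hypothetical stack name and output key for the application under test
TARGET_URL=$(aws cloudformation describe-stacks --stack-name my-application-stack \
    --query "Stacks[0].Outputs[?OutputKey=='ApplicationUrl'].OutputValue" --output text)

# Hypothetical Name tag used to find a running ZAP instance
ZAP_HOST=$(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=zap-server" "Name=instance-state-name,Values=running" \
    --query 'Reservations[0].Instances[0].PrivateIpAddress' --output text)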

Whatever your chosen method for discovering these servers, they’ll need to be passed to the ‘pen-test-app.py’ as follows:

python pen-test-app.py --zap-host <ZAP_HOST:PORT> --target <TARGET_URL>

If using the Packer method above to build your AMI, an EC2 instance could be launched to host ZAP:

INSTANCE_ID=$(aws ec2 run-instances --image-id <AMI_ID_from_Packer> --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups my-sg | sed -n 's/.*InstanceId": "\(.*\)".*/\1/p')
ZAP_HOST=$(aws ec2 describe-instances --instance-ids ${INSTANCE_ID} --query 'Reservations[*].Instances[0].NetworkInterfaces[0].PrivateIpAddresses[0].PrivateIpAddress' --output text)
python pen-test-app.py --zap-host ${ZAP_HOST} --target <TARGET_URL>
aws ec2 terminate-instances --instance-ids ${INSTANCE_ID}
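One caveat: run-instances returns before ZAP is actually reachable, so in practice you’d wait for the instance and give ZAP a moment to start listening before invoking the tests – something along these lines:

# Wait for the instance to be running, then allow time for ZAP to start
aws ec2 wait instance-running --instance-ids ${INSTANCE_ID}
sleep 60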

After the script has run, it will create ‘results.json’, which will then be read by Behave. Behave will return a failed exit status if any of the tests fail. This will in turn cause the Jenkins job to fail and the pipeline to stop. The developer must then log in to Jenkins, access the job, and view the console output to determine what failed. An alternative is to have some kind of reporting handle the output of Behave, so that the reports can be easily accessed by the developer. To prevent Jenkins from reading the failed exit code from Behave, and to capture the output, we want to run something like this:

behave_results=$(behave > behave_results.txt; echo "$?")

What this does is write the Behave results to ‘behave_results.txt’ and capture the exit code in the ‘behave_results’ variable. Now we can run commands to manage the output before exiting with the ‘behave_results’ status. In the example below, we simply upload the report to S3:

behave_results=$(behave > behave_results.txt; echo "$?")
aws s3 cp behave_results.txt s3://our-application-pipeline/reports/pen-test/
exit ${behave_results}

This will upload the resulting report to the S3 bucket and then exit with the Behave exit code, causing the Jenkins job to ‘pass’ or ‘fail’ accordingly.

Lastly, in place of a simple S3 upload, a more sophisticated reporting script can be put in place to capture additional data, such as Jenkins build information, and format it as JSON or YAML to be consumed upstream. To make life easier, Behave can be told to emit its output as JSON. Replace the behave line above with:

behave_results=$(behave --no-summary --format json.pretty > behave_results.json; echo "$?")

It’s also worth noting that an AWS Lambda function can be created to watch for changes in the S3 bucket. The Lambda function can create a pretty HTML report that it pushes back up to a website-enabled S3 bucket for viewing.

Summary

This tutorial scratches the surface of what OWASP ZAP is capable of when integrated into a full CI/CD pipeline. As this post attempted to show, it’s not too difficult to implement automated penetration testing in your own CI/CD pipelines. While it is not meant to fully replace manual penetration testing of your software, it does take care of the tedious portions of testing and provides fast results, allowing developers to quickly attend to any issues.

Stelligent is hiring! Do you enjoy working on complex problems like security in the CD pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Automating and Orchestrating OpsWorks in CloudFormation and CodePipeline

In this post, you’ll learn how to provision, configure, and orchestrate a PHP application using the AWS OpsWorks application management service within a deployment pipeline in AWS CodePipeline that’s capable of deploying new infrastructure and code changes whenever developers commit changes to the AWS CodeCommit version-control repository. This way, team members can release new changes to users whenever they choose to do so: aka, Continuous Delivery.

Recently, AWS announced the integration of OpsWorks into AWS CodePipeline so I’ll be describing various components and services that support this solution including CodePipeline along with codifying the entire infrastructure in AWS CloudFormation. As part of the announcement, AWS provided a step-by-step tutorial of integrating OpsWorks with CodePipeline that I used as a reference in automating the entire infrastructure and workflow.

This post describes how to automate all the steps using CloudFormation so that you can click on a Launch Stack button to instantiate all of your infrastructure resources.

OpsWorks

“AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application’s architecture and the specification of each component including package installation, software configuration and resources such as storage. Start from templates for common technologies like application servers and databases or build your own to perform any task that can be scripted. AWS OpsWorks includes automation to scale your application based on time or load and dynamic configuration to orchestrate changes as your environment scales.” [1]

OpsWorks provides a structured way to automate the operations of your AWS infrastructure and deployments with lifecycle events and the Chef configuration management tool. OpsWorks provides more flexibility than Elastic Beanstalk and more structure and constraints than CloudFormation. There are several key constructs that compose OpsWorks. They are:

  • Stack – An OpsWorks stack is the logical container defining OpsWorks layers, instances, apps and deployments.
  • Layer – There are built-in layers provided by OpsWorks such as Static Web Servers, Rails, Node.js, etc. But, you can also define your own custom layers as well.
  • Instances – These are EC2 instances on which the OpsWorks agent has been installed. There are only certain Linux and Windows operating systems supported by OpsWorks instances.
  • App – “Each application is represented by an app, which specifies the application type and contains the information that is needed to deploy the application from the repository to your instances.” [2]
  • Deployment – Runs Chef recipes to deploy the application onto instances based on the defined layer in the stack.

There are also lifecycle events that get executed for each deployment. Lifecycle events are linked to one or more Chef recipes. The five lifecycle events are setup, configure, deploy, undeploy, and shutdown. Events get triggered based upon certain conditions, and some events can be triggered multiple times. They are described in more detail below (a hedged CLI example of attaching custom recipes to these events follows the list):

  • setup – Runs after an instance finishes booting, as part of its initial setup
  • configure – Runs on all instances in all layers whenever a new instance comes into service, an EIP changes, or an ELB is attached
  • deploy – Runs when a deployment is performed on an instance
  • undeploy – Runs when an app is deleted
  • shutdown – Runs before an instance is terminated
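As promised above, here’s a hedged example of how custom Chef recipes map to these lifecycle events on a layer, using the AWS CLI. The layer ID and the cookbook/recipe names are placeholders; the same mapping can be expressed with the CustomRecipes property on an OpsWorks layer in CloudFormation.

# Placeholder layer ID and recipe names -- attach custom recipes to lifecycle events
aws opsworks update-layer --layer-id <layer-id> \
  --custom-recipes '{
    "Setup":     ["myapp::dependencies"],
    "Configure": ["myapp::discover_peers"],
    "Deploy":    ["myapp::deploy"],
    "Shutdown":  ["myapp::drain_connections"]
  }'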

Solution Architecture and Components

In Figure 2, you see the deployment pipeline and infrastructure architecture for the OpsWorks/CodePipeline integration.

opsworks_pipeline_arch.jpg
Figure 2 – Deployment Pipeline Architecture for OpsWorks

Both OpsWorks and CodePipeline are defined in a single CloudFormation stack, which is described in more detail later in this post. Here are the key services and tools that make up the solution:

  • OpsWorks – In this solution, OpsWorks configures the operation of your infrastructure using lifecycle events and Chef
  • CodePipeline – Orchestrates all actions in your software delivery process. In this solution, I provision a CodePipeline pipeline with two stages and one action per stage in CloudFormation
  • CloudFormation – Automates the provisioning of all AWS resources. In this solution, I’m using CloudFormation to automate the provisioning of OpsWorks, CodePipeline, IAM, and S3
  • CodeCommit – A Git repo used to host the sample application code from this solution
  • PHP – In this solution, I leverage AWS’ OpsWorks sample application written in PHP.
  • IAM – The CloudFormation stack defines an IAM Instance Profile and Roles for controlled access to AWS resources
  • EC2 – A single compute instance is launched as part of the configuration of the OpsWorks stack
  • S3 – Hosts the deployment artifacts used by CodePipeline.

Create and Connect to a CodeCommit Repository

While you can store your software code in any version-control repository, in this solution, I’ll be using the AWS CodeCommit Git repository. I’ll be integrating CodeCommit with CodePipeline. I’m basing the code off of the Amazon OpsWorks PHP Simple Demo App located at https://github.com/awslabs/opsworks-demo-php-simple-app.

To create your own CodeCommit repo, follow these instructions: Create and Connect to an AWS CodeCommit Repository. I called my CodeCommit repository opsworks-php-demo. You can call yours the same, but if you name it something different, be sure to replace the samples with your repo name.

After you create your CodeCommit repo, copy the contents from the AWS PHP OpsWorks Demo app and commit all of the files.

Implementation

I created this sample solution by stitching together several available resources including the CloudFormation template provided by the Step-by-Step Tutorial from AWS on integrating OpsWorks with CodePipeline and existing templates we use at Stelligent for CodePipeline. Finally, I manually created the pipeline in CodePipeline using the same step-by-step tutorial and then obtained the configuration of the pipeline using the get-pipeline command as shown in the command snippet below.

aws codepipeline get-pipeline --name OpsWorksPipeline > pipeline.json

This section describes the various resources of the CloudFormation solution in greater detail including IAM Instance Profiles and Roles, the OpsWorks resources, and CodePipeline.

Security Group

Here, you see the CloudFormation definition for the security group that the OpsWorks instance uses. The definition restricts the ingress port to 80 so that only web traffic is accepted on the instance.

    "CPOpsDeploySecGroup":{
      "Type":"AWS::EC2::SecurityGroup",
      "Properties":{
        "GroupDescription":"Lets you manage OpsWorks instances deployed to by CodePipeline"
      }
    },
    "CPOpsDeploySecGroupIngressHTTP":{
      "Type":"AWS::EC2::SecurityGroupIngress",
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"80",
        "ToPort":"80",
        "CidrIp":"0.0.0.0/0",
        "GroupId":{
          "Fn::GetAtt":[
            "CPOpsDeploySecGroup",
            "GroupId"
          ]
        }
      }
    },

IAM Role

Here, you see the CloudFormation definition for the OpsWorks instance role. In the same CloudFormation template, there’s a definition for an IAM service role and an instance profile. The instance profile refers to OpsWorksInstanceRole defined in the snippet below.

The roles, policies, and profiles restrict the services and resources to the essential permissions they need to perform their functions.

    "OpsWorksInstanceRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  {
                    "Fn::FindInMap":[
                      "Region2Principal",
                      {
                        "Ref":"AWS::Region"
                      },
                      "EC2Principal"
                    ]
                  }
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"s3-get",
            "PolicyDocument":{
              "Version":"2012-10-17",
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "s3:GetObject"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

Stack

The snippet below shows the CloudFormation definition for the OpsWorks stack. It references the IAM service role and instance profile, uses Chef 11.10 as its configuration manager, and uses Amazon Linux 2016.03 as its operating system. This stack is used as the basis for defining the layer, app, instance, and deployment that are described later in this section.

    "MyStack":{
      "Type":"AWS::OpsWorks::Stack",
      "Properties":{
        "Name":{
          "Ref":"AWS::StackName"
        },
        "ServiceRoleArn":{
          "Fn::GetAtt":[
            "OpsWorksServiceRole",
            "Arn"
          ]
        },
        "ConfigurationManager":{
          "Name":"Chef",
          "Version":"11.10"
        },
        "DefaultOs":"Amazon Linux 2016.03",
        "DefaultInstanceProfileArn":{
          "Fn::GetAtt":[
            "OpsWorksInstanceProfile",
            "Arn"
          ]
        }
      }
    },

Layer

The OpsWorks PHP layer is described in the CloudFormation definition below. It references the OpsWorks stack that was previously created in the same template. It also uses the php-app layer type. For a list of valid types, see CreateLayer in the AWS API documentation. This resource also enables auto healing, assigns public IPs and references the previously-created security group.

    "MyLayer":{
      "Type":"AWS::OpsWorks::Layer",
      "Properties":{
        "StackId":{
          "Ref":"MyStack"
        },
        "Name":"MyLayer",
        "Type":"php-app",
        "Shortname":"mylayer",
        "EnableAutoHealing":"true",
        "AutoAssignElasticIps":"false",
        "AutoAssignPublicIps":"true",
        "CustomSecurityGroupIds":[
          {
            "Fn::GetAtt":[
              "CPOpsDeploySecGroup",
              "GroupId"
            ]
          }
        ]
      },
      "DependsOn":[
        "MyStack",
        "CPOpsDeploySecGroup"
      ]
    },

OpsWorks Instance

In the snippet below, you see the CloudFormation definition for the OpsWorks instance. It references the OpsWorks layer and stack that are created in the same template. It defines the instance type as c3.large and refers to the EC2 Key Pair that you will provide as an input parameter when launching the stack.

    "MyInstance":{
      "Type":"AWS::OpsWorks::Instance",
      "Properties":{
        "LayerIds":[
          {
            "Ref":"MyLayer"
          }
        ],
        "StackId":{
          "Ref":"MyStack"
        },
        "InstanceType":"c3.large",
        "SshKeyName":{
          "Ref":"KeyName"
        }
      }
    },

OpsWorks App

In the snippet below, you see the CloudFormation definition for the OpsWorks app. It refers to the previously created OpsWorks stack and uses the current stack name for the app name – making it unique. In the OpsWorks type, I’m using php. For other supported types, see CreateApp.

I’m using other for the AppSource type (OpsWorks doesn’t seem to make the documentation obvious in terms of the types that AppSource supports, so I resorted to using the OpsWorks console to determine the possibilities). I’m using other because my source type is CodeCommit, which isn’t currently an option in OpsWorks.

    "MyOpsWorksApp":{
      "Type":"AWS::OpsWorks::App",
      "Properties":{
        "StackId":{
          "Ref":"MyStack"
        },
        "Type":"php",
        "Shortname":"phptestapp",
        "Name":{
          "Ref":"AWS::StackName"
        },
        "AppSource":{
          "Type":"other"
        }
      }
    },

CodePipeline

In the snippet below, you see the CodePipeline definition for the Deploy stage and the DeployPHPApp action in CloudFormation. It takes MyApp as an Input Artifact – which is an Output Artifact of the Source stage and action that obtains code assets from CodeCommit.

The action uses a Deploy category and OpsWorks as the Provider. It takes four inputs for the configuration: StackId, AppId, DeploymentType, LayerId. With the exception of DeploymentType, these values are obtained as references from previously created AWS resources in this CloudFormation template.

For more information, see CodePipeline Concepts.

         {
            "Name":"Deploy",
            "Actions":[
              {
                "InputArtifacts":[
                  {
                    "Name":"MyApp"
                  }
                ],
                "Name":"DeployPHPApp",
                "ActionTypeId":{
                  "Category":"Deploy",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"OpsWorks"
                },
                "OutputArtifacts":[

                ],
                "Configuration":{
                  "StackId":{
                    "Ref":"MyStack"
                  },
                  "AppId":{
                    "Ref":"MyOpsWorksApp"
                  },
                  "DeploymentType":"deploy_app",
                  "LayerId":{
                    "Ref":"MyLayer"
                  }
                },
                "RunOrder":1
              }
            ]
          }

Launch the Stack

Click the button below to launch a CloudFormation stack that provisions the OpsWorks environment including all the resources previously described such as CodePipeline, OpsWorks, IAM Roles, etc.

When launching the stack, you’ll enter a value for the KeyName parameter from the drop-down. Optionally, you can enter values for your CodeCommit repository name and branch if they are different from the default values.

opsworks_pipeline_cfn
Figure 3- Parameters for Launching the CloudFormation Stack

You will be charged for your AWS usage – particularly EC2, CodePipeline, and S3.

To launch the same stack from your AWS CLI, type the following (while modifying the same parameter values described above):

aws cloudformation create-stack --stack-name OpsWorksPipelineStack --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/codepipeline-opsworks.json --region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" --parameters  ParameterKey=KeyName,ParameterValue=YOURKEYNAME

Outputs

Once the CloudFormation stack successfully launches, there’s an output for the CodePipelineURL. You can click on this value to open the pipeline, which gets the source assets from CodeCommit and deploys them to the OpsWorks stack and its associated resources. See the screenshot below.

cfn_opsworks_pipeline_outputs
Figure 4 – CloudFormation Outputs for CodePipeline/OpsWorks stack

Once the pipeline is complete, you can access the OpsWorks stack and click on the Public IP link for one of the instances to launch the PHP application that was deployed using OpsWorks as shown in Figures 5 and 6 below.

opsworks_public_ip.jpg
Figure 5 – Public IP for the OpsWorks instance

 

opsworks_app_before.jpg
Figure 6 – OpsWorks PHP app once initially deployed

Commit Changes to CodeCommit

Make some visual changes to the code (e.g. your local copy of index.php) and commit these changes to your CodeCommit repository to see them deployed through your pipeline. You perform these actions from the directory where you cloned your CodeCommit repo (the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to rust orange"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline execution. After the pipeline successfully completes, follow the same instructions for launching the application from your browser – as shown in Figure 7.

opsworks_app_after.jpg
Figure 7 – Application after code changes committed to CodeCommit, orchestrated by CodePipeline and deployed by OpsWorks

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/opsworks. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Useful Resources and References

OpsWorks Reference

Below, I’ve documented some additional information that might be useful on the OpsWorks service itself including its available integrations, supported versions and features.

  • OpsWorks supports three application source types: GitHub, S3, and HTTP.
  • You can store up to five versions of an OpsWorks application: the current revision plus four more for rollbacks.
  • When using the create-deployment method, you can target the OpsWorks stack, app, or instance
  • OpsWorks instances require internet access to communicate with the OpsWorks endpoint
  • Chef supports Windows in version 12
  • You cannot mix Windows and Linux instances in an OpsWorks stack
  • To change the default OS in OpsWorks, you need to change the OS and reprovision the instances
  • You cannot change the VPC for an OpsWorks instance
  • You can add ELB, EIPs, Volumes and RDS to an OpsWorks stack
  • OpsWorks autoheals at the layer level
  • You can assign multiple Chef recipes to an OpsWorks layer event
  • The three instance types in OpsWorks are: 24/7, time-based, load-based
  • To initiate a rollback in OpsWorks, you use the create-deployment command
  • The following commands are available when using OpsWorks create-deployment, along with possible use cases (a hedged CLI example follows this list):
    • install_dependencies
    • update_dependencies – Patches to the Operating System. Not available after Chef 12.
    • update_custom_cookbooks – pulling down changes in your Chef cookbooks
    • execute_recipes – manually run specific Chef recipes that are defined in your layers
    • configure – service discovery or whenever endpoints change
    • setup
    • deploy
    • rollback
    • start
    • stop
    • restart
    • undeploy
  • To enable the use of multiple custom cookbook repositories in OpsWorks, you can enable custom cookbooks at the stack level and then create a cookbook that has a Berkshelf file with multiple sources. Before Chef 11.10, you couldn’t use multiple cookbook repositories.
  • You can define Chef databags in OpsWorks Users, Stacks, Layers, Apps and Instances
  • OpsWorks Auto Healing is triggered when the OpsWorks agent detects a loss of communication; OpsWorks then stops and restarts the instance. If that fails, manual intervention is required
  • OpsWorks will not auto heal an upgrade to the OS
  • OpsWorks does not auto heal based on performance monitoring, only on failures.
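To make the create-deployment commands above more concrete, here’s a hedged example using the AWS CLI; the stack ID, app ID, and recipe name are placeholders:

# Deploy the app to the instances in its layer
aws opsworks create-deployment --stack-id <stack-id> --app-id <app-id> --command '{"Name":"deploy"}'

# Pull down cookbook changes, then run a specific recipe
aws opsworks create-deployment --stack-id <stack-id> --command '{"Name":"update_custom_cookbooks"}'
aws opsworks create-deployment --stack-id <stack-id> --command '{"Name":"execute_recipes","Args":{"recipes":["myapp::deploy"]}}'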

Acknowledgements

My colleague Casey Lee provided some of the background information on OpsWorks features. I also used several resources from AWS including the PHP sample app and the step-by-step tutorial on the OpsWorks/CodePipeline integration.

 

 

 

DevOps in AWS Radio: AWS CodeCommit and CodePipeline using CloudFormation (Episode 1)

In this episode, Paul Duvall and Brian Jakovich from Stelligent cover recent DevOps in AWS news and do a deep dive into automating the integration of AWS CodeCommit and CodePipeline using CloudFormation.

Finally, they bring you into a Stelligent roundtable to discuss recent DevOps in AWS engagements with customers.

Automating ECS: Orchestrating in CodePipeline and CloudFormation (Part 2)

In my first post on automating the EC2 Container Service (ECS), I described how I automated the provisioning of ECS in AWS CloudFormation using its JSON-based DSL.

In this second and last part of the series, I will demonstrate how to create a deployment pipeline in AWS CodePipeline to deploy changes to ECS Docker images in the EC2 Container Registry (ECR).

In doing this, you’ll not only see how to automate the creation of the infrastructure but also automate the deployment of the application and its infrastructure via Docker containers. This way you can commit infrastructure, application and deployment changes as code to your version-control repository and have these changes automatically deployed to production or production-like environments.

The benefit is the customer responsiveness this embodies: you can deploy new features or fixes to users in minutes, not days or weeks.

Pipeline Architecture

In the figure below, you see the high-level architecture for the deployment pipeline.

Deployment Pipeline Architecture for ECS

With the exception of the CodeCommit repository creation, most of the architecture is implemented in a CloudFormation template. This is possible partly because the solution doesn’t require a traditional configuration management tool to configure the compute instances.

CodePipeline is a Continuous Delivery service that enables you to orchestrate every step of your software delivery process in a workflow that consists of a series of stages and actions. These actions perform the steps of your software delivery process.

In CodePipeline, I’ve defined two stages: Source and Build. The Source stage retrieves code artifacts via a CodeCommit repository whenever someone commits a new change. This initiates the pipeline. CodePipeline is integrated with the Jenkins Continuous Integration server. The Build stage updates the ECS Docker image (which runs a small PHP web application) within ECR and makes the new application available through an ELB endpoint.

Jenkins is installed and configured on an Amazon EC2 instance within an Amazon Virtual Private Cloud (VPC). The CloudFormation template runs commands to install and configure the Jenkins server, install and configure Docker, install and configure the CodePipeline plugin and configure the job that’s run as part of the CodePipeline build action. The Jenkins job is configured to run a bash script that’s committed to the CodeCommit repository. This bash script updates the ECS service and task definition by running a Docker build, tag and push to the ECR repository. I describe the implementation of this architecture in more detail in this post.

Jenkins

In this example, CodePipeline manages the orchestration of the software delivery workflow. Since CodePipeline doesn’t actually execute the actions, you need to integrate it with an execution platform. To perform the execution of the actions, I’m using the Jenkins Continuous Integration server. I’ll configure a CodePipeline plugin for Jenkins so that Jenkins executes certain CodePipeline actions.

In particular, I have an action to update an ECS service. I do this by running a CloudFormation update on the stack. CloudFormation looks for any differences in the templates and applies those changes to the existing stack.

To orchestrate and execute this CloudFormation update, I configure a CodePipeline custom action that calls a Jenkins job. In this Jenkins job, I call a shell script passing several arguments.

Provision Jenkins in CloudFormation

In the CloudFormation template, I create an EC2 instance on which I will install and configure the Jenkins server. This CloudFormation script is based on the CodePipeline starter kit.

To launch a Jenkins server in CloudFormation, you will use the AWS::EC2::Instance resource. Before doing this, you’ll create an IAM role and an EC2 security group in the already provisioned VPC (the VPC provisioning is part of the same CloudFormation script).

Within the Metadata attribute of the resource (i.e. the EC2 instance on which Jenkins will run), you use AWS::CloudFormation::Init to define the instance configuration. To apply this configuration, the instance’s user data calls cfn-init like this:

"/opt/aws/bin/cfn-init -v -s ",

Then, you can install and configure Docker:

"# Install Docker\n",
"cd /tmp/\n",
"yum install -y docker\n",

On this same instance, you will install and configure the Jenkins server:

"# Install Jenkins\n",
...
"yum install -y jenkins-1.658-1.1\n",
"service jenkins start\n",

And, apply the dynamic Jenkins configuration for the job so that it updates the CloudFormation stack based on arguments passed to the shell script.

"/bin/sed -i \"s/MY_STACK/",
{
"Ref":"AWS::StackName"
},
"/g\" /tmp/config-template.xml\n",

In the config-template.xml, I added tokens that get replaced as part of the commands run from the CloudFormation template. You can see a snippet of this below in which the command for the Jenkins job makes a call to the configure-ecs.sh bash script with some tokenized parameters.

<command>bash ./configure-ecs.sh MY_STACK MY_ACCTID MY_ECR</command>

All of the commands for installing and configuring the Jenkins Server, Docker, the CodePipeline plugin and Jenkins jobs are described in the CloudFormation template that is hosted in the version-control repository.

Jenkins Job Configuration Template

In the previous code snippets from CloudFormation, you see that I’m using sed to update a file called config-template.xml. This is a Jenkins job configuration file in which I’m replacing token variables with dynamic information passed from CloudFormation. This information is used to run a bash script to update the CloudFormation stack – which is described in the next section.

ECS Service Script to Update CloudFormation Stack

The code snippet below shows how the bash script captures the arguments passed by the Jenkins job into bash variables. Later in the script, it uses these variables to call the update-stack command of the CloudFormation API to apply a new ECS Docker image to the service.

MY_STACK=$1
MY_ACCTID=$2
MY_ECR=$3

uuid=$(date +%s)
awsacctid="$MY_ACCTID"
ecr_repo="$MY_ECR"
ecs_stack_name="$MY_STACK"
ecs_template_url="$MY_URL"
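
Note that ecs_template_url is set from MY_URL, which is not one of the three positional arguments shown here; in the full script it is defined separately (it is hard coded to an S3 location, as described in the Making Modifications section below). A hypothetical invocation of the script with placeholder values looks like this:

# Stack name, AWS account id, and ECR repository name are placeholder values
bash ./configure-ecs.sh my-ecs-stack 123456789012 my-ecr-repo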

In the code snippet below of the configure-ecs.sh script, I’m building, tagging and pushing to the Docker repository in my EC2 Container Registry repository using the dynamic values passed to this script from Jenkins (which were initially passed from the parameters and resources of my CloudFormation script).

In doing this, it creates a new Docker image for each commit and tags it with a unique id based on date and time. Finally, it uses the AWS CLI to call the update-stack command of the CloudFormation API using the variable information.

eval $(aws --region us-east-1 ecr get-login)

# Build, Tag and Deploy Docker
docker build -t $ecr_repo:$uuid .
docker tag $ecr_repo:$uuid $awsacctid.dkr.ecr.us-east-1.amazonaws.com/$ecr_repo:$uuid
docker push $awsacctid.dkr.ecr.us-east-1.amazonaws.com/$ecr_repo:$uuid

aws cloudformation update-stack --stack-name $ecs_stack_name \
--template-url $ecs_template_url --region us-east-1 \
--capabilities="CAPABILITY_IAM" --parameters \
ParameterKey=AppName,UsePreviousValue=true \
ParameterKey=ECSRepoName,UsePreviousValue=true \
ParameterKey=DesiredCapacity,UsePreviousValue=true \
ParameterKey=KeyName,UsePreviousValue=true \
ParameterKey=RepositoryBranch,UsePreviousValue=true \
ParameterKey=RepositoryName,UsePreviousValue=true \
ParameterKey=InstanceType,UsePreviousValue=true \
ParameterKey=MaxSize,UsePreviousValue=true \
ParameterKey=S3ArtifactBucket,UsePreviousValue=true \
ParameterKey=S3ArtifactObject,UsePreviousValue=true \
ParameterKey=SSHLocation,UsePreviousValue=true \
ParameterKey=YourIP,UsePreviousValue=true \
ParameterKey=ImageTag,ParameterValue=$uuid
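
If you want the script (or a wrapper around it) to block until the stack update finishes, one option is to poll CloudFormation from the AWS CLI; a minimal sketch:

# Wait for the stack update to succeed or fail, then review the most recent events
aws cloudformation wait stack-update-complete --stack-name $ecs_stack_name --region us-east-1
aws cloudformation describe-stack-events --stack-name $ecs_stack_name --region us-east-1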

Now that you’ve seen the basics of installing and configuring Jenkins in CloudFormation and what happens when the Jenkins job is run through the CodePipeline orchestration, let’s look at the steps for configuring the CodePipeline part of the CodePipeline/Jenkins configuration.

Create a Pipeline using AWS CodePipeline

Before I create a working pipeline, I prefer to model the stages and actions in CodePipeline using Lambda so that I can think through the workflow. To do this I refer to my blog post on Mocking AWS CodePipeline pipelines with Lambda. I’m going to create a two-stage pipeline consisting of a Source and a Build stage. These stages and the actions in these stages are described in more detail below.
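
Once the pipeline has been created, you can also inspect its stages and actions from the AWS CLI; the pipeline name below is a placeholder:

# View the pipeline's structure (stages and actions) as JSON
aws codepipeline get-pipeline --name my-ecs-pipeline

# View the current state of each stage and action
aws codepipeline get-pipeline-state --name my-ecs-pipeline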

Define a Custom Action

There are five types of action categories in CodePipeline: Source, Build, Deploy, Invoke and Test. Each action has four attributes: category, owner, provider and version. There are three types of action owners: AWS, ThirdParty and Custom. AWS refers to built-in actions provided by AWS. Currently, there are four built-in action providers from AWS: S3, CodeCommit, CodeDeploy and ElasticBeanstalk. Examples of ThirdParty action providers include RunScope and GitHub. If none of the action providers suit your needs, you can define custom actions in CodePipeline. In my case, I wanted to run a script from a Jenkins job, so I used the CloudFormation sample configuration from the CodePipeline starter kit for the configuration of the custom build action that I use to integrate Jenkins with CodePipeline. See the snippet below.

    "CustomJenkinsActionType":{
      "Type":"AWS::CodePipeline::CustomActionType",
      "DependsOn":"JenkinsHostWaitCondition",
      "Properties":{
        "Category":"Build",
        "Provider":{
          "Fn::Join":[
            "",
            [
              {
                "Ref":"AppName"
              },
              "-Jenkins"
            ]
          ]
        },
        "Version":"1",
        "ConfigurationProperties":[
          {
            "Key":"true",
            "Name":"ProjectName",
            "Queryable":"true",
            "Required":"true",
            "Secret":"false",
            "Type":"String"
          }
        ],
        "InputArtifactDetails":{
          "MaximumCount":5,
          "MinimumCount":0
        },
        "OutputArtifactDetails":{
          "MaximumCount":5,
          "MinimumCount":0
        },
        "Settings":{
          "EntityUrlTemplate":{
            "Fn::Join":[
              "",
              [
                "http://",
                {
                  "Fn::GetAtt":[
                    "JenkinsServer",
                    "PublicIp"
                  ]
                },
                "/job/{Config:ProjectName}"
              ]
            ]
          },
          "ExecutionUrlTemplate":{
            "Fn::Join":[
              "",
              [
                "http://",
                {
                  "Fn::GetAtt":[
                    "JenkinsServer",
                    "PublicIp"
                  ]
                },
                "/job/{Config:ProjectName}/{ExternalExecutionId}"
              ]
            ]
          }
        }
      }
    },

The example pipeline that I’ve defined in CodePipeline (and described as code in CloudFormation) uses the above custom action in the Build stage of the pipeline, which is described in more detail in the Build Stage section later.
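
After the stack registers this custom action type, you can confirm it exists in your account from the AWS CLI; for example:

# List custom action types registered in your account
aws codepipeline list-action-types --action-owner-filter Custom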

Source Stage

The Source stage has a single action that looks for any changes to a CodeCommit repository. If it discovers any new commits, it retrieves the artifacts from the CodeCommit repository and stores them in an encrypted form in an S3 bucket. If it’s successful, it transitions to the next stage: Build. A snippet from the CodePipeline resource definition for the Source stage in CloudFormation is shown below.

        "Stages":[
          {
            "Name":"Source",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"Source",
                "ActionTypeId":{
                  "Category":"Source",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"CodeCommit"
                },
                "OutputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-SourceArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "Configuration":{
                  "BranchName":{
                    "Ref":"RepositoryBranch"
                  },
                  "RepositoryName":{
                    "Ref":"RepositoryName"
                  }
                },
                "RunOrder":1
              }
            ]
          },

Build Stage

The Build stage invokes actions to create a new ECS repository if one doesn’t exist, builds and tags a Docker image and makes a call to a CloudFormation template to launch the rest of the ECS environment – including creating an ECS cluster, task definition, ECS services, ELB, Security Groups and IAM resources. It does this using the custom CodePipeline action for Jenkins that I described earlier. A snippet from the CodePipeline resource definition in CloudFormation for the Build stage is shown below.

          {
            "Name":"Build",
            "Actions":[
              {
                "Name":"DeployPHPApp",
                "InputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-SourceArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "ActionTypeId":{
                  "Category":"Build",
                  "Owner":"Custom",
                  "Version":"1",
                  "Provider":{
                    "Fn::Join":[
                      "",
                      [
                        {
                          "Ref":"AWS::StackName"
                        },
                        "-Jenkins"
                      ]
                    ]
                  }
                },
                "OutputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-BuiltArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "Configuration":{
                  "ProjectName":{
                    "Ref":"AWS::StackName"
                  }
                },
                "RunOrder":1
              }
            ]
          }

The custom action for Jenkins (via the CodePipeline plugin) polls CodePipeline for work. When it finds work, it performs the task associated with the CodePipeline action. In this case, it runs the Jenkins job that calls the configure-ecs.sh script. This bash script makes an update-stack call against the original CloudFormation stack, passing in the new image via the ImageTag parameter, which holds the new tag generated for the Docker image built as part of this script.

CloudFormation applies the minimum necessary changes to the infrastructure based on the stack update. In this case, I’m only providing a new image tag, but this results in a new ECS task definition for the service. In the CloudFormation events console, you’ll see a message similar to the one below:

AWS::ECS::TaskDefinition Requested update requires the creation of a new physical resource; hence creating one.

As I mentioned in part 1 of this series, I defined a DeploymentConfiguration type with a MinimumHealthyPercent property of 0 since I’m only using one EC2 instance while running through the earlier stages of the pipeline. This means the application experiences a few seconds of downtime during the update. Like most applications/services these days, if I needed continual uptime, I’d increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.

Other Stages

In the example I provided, I stop at the Build stage. If you were to take this to production, you might include other stages as well. For example, you might have a “Staging” stage with actions that deploy the application to the ECS containers using a production-like configuration, which might include more instances in the Auto Scaling Group.

Once Staging is complete, the pipeline would automatically transition to the Production stage, where it might make Lambda calls to test the application running in ECS containers. If everything looks ok, it switches the Route 53 hosted zone endpoint to the new containers.

Launch the ECS Stack and Pipeline

In this section, you’ll launch the CloudFormation stack that creates the ECS and Pipeline resources.

Prerequisites

You need to have already created an ECR repository and a CodeCommit repository to successfully launch this stack. For instructions on creating an ECR repository, see part 1 of this series (part 1 also includes a CloudFormation stack that creates the ECR repository for you). For creating a CodeCommit repository, you can either see part 1 or use the instructions described at: Create and Connect to an AWS CodeCommit Repository. Alternatively, you can create both prerequisites from the AWS CLI, as shown below.
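
Here is a minimal sketch; the repository names are placeholders:

# Create the ECR repository (name is a placeholder)
aws ecr create-repository --repository-name my-ecr-repo --region us-east-1

# Create the CodeCommit repository (name is a placeholder)
aws codecommit create-repository --repository-name my-cc-repo --region us-east-1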

Launch the Stack

Launch a CloudFormation stack that provisions the ECS environment, including all the resources previously described such as CodePipeline, ECS Cluster, ECS Task Definition, ECS Service, ELB, VPC resources, IAM Roles, etc. You can do this from the CloudFormation console or from the AWS CLI as shown below.

You’ll enter values for the following parameters: RepositoryName, YourIP, KeyName, and ECSRepoName.

To launch the same stack from your AWS CLI, type the following (while modifying the same parameter values described above):

aws cloudformation create-stack --stack-name ecs-stack-1648 \
--template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/ecs-pipeline.json \
--region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" \
--parameters ParameterKey=RepositoryName,ParameterValue=YOURCCREPO \
ParameterKey=RepositoryBranch,ParameterValue=master \
ParameterKey=KeyName,ParameterValue=YOUREC2KEYPAIR \
ParameterKey=YourIP,ParameterValue=YOURIP/32 \
ParameterKey=ECSRepoName,ParameterValue=YOURECRREPO \
ParameterKey=ECSCFNURL,ParameterValue=NOURL \
ParameterKey=AppName,ParameterValue=app-name-1648

Outputs

Once the CloudFormation stack successfully launches, there are several outputs but the two most relevant are AppURL and CodePipelineURL. You can click on the AppURL value to launch the PHP application running on ECS from the ELB endpoint. The CodePipelineURL output value launches the generated pipeline from the CodePipeline console. See the screenshot below.

[Screenshot: CloudFormation stack outputs, including AppURL and CodePipelineURL]

Access the Application

Once the stack successfully completes, go to the Outputs tab for the CloudFormation stack and click on the AppURL value to launch the application.

[Screenshot: PHP application before the change]

Commit Changes to CodeCommit

Make some visual changes to the code and commit these changes to your CodeCommit repository to see them get deployed through your pipeline. You perform these actions from your local clone of the CodeCommit repo (the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to pink"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline execution. After the pipeline successfully completes, follow the same instructions as before for launching the application from your browser.
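
If you don’t want to wait for CodePipeline to detect the commit on its own, you can also start an execution manually from the AWS CLI; the pipeline name is a placeholder:

# Manually trigger a new pipeline execution
aws codepipeline start-pipeline-execution --name my-ecs-pipeline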

[Screenshot: PHP application after the change]

Making Modifications

While the solution works “straight out of the box”, if you’d like to make some changes, here are a few sections of the code that you’ll need to modify.

configure-ecs.sh

The configure-ecs.sh Bash script runs the Docker commands to build, tag and push the image, and then updates the existing CloudFormation stack to update the ECS service and task. The source for this bash script is here: https://github.com/stelligent/cloudformation_templates/blob/master/labs/ecs/configure-ecs.sh. I hard coded the ecs_template_url variable to a specific S3 location. To make modifications, download the source file from either GitHub or S3, make your desired changes, and then point the ecs_template_url variable to the new location (presumably in S3).

config-template.xml

The config-template.xml file is the Jenkins job configuration for the ECS update action. This XML file contains tokens that get replaced by the ecs-pipeline.json CloudFormation template with dynamic information such as the CloudFormation stack name and account id. The file is obtained via a wget command from within the template and is stored in S3 at https://s3.amazonaws.com/stelligent-training-public/public/jenkins/config-template.xml, so you can copy it to an S3 location in your own account and update the CloudFormation template to point to the new location. In doing this, you can modify the behavior of the Jenkins job when the file is used by Jenkins.
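
A minimal sketch of hosting your own copy, assuming a bucket you own named your-bucket: download the file, edit it, and upload it to your bucket.

# Download the original job configuration, then upload your edited copy to a bucket you own
wget https://s3.amazonaws.com/stelligent-training-public/public/jenkins/config-template.xml
aws s3 cp config-template.xml s3://your-bucket/jenkins/config-template.xml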

Summary

In this series, you learned how to use CloudFormation to fully automate the provisioning of the EC2 Container Service along with a CodePipeline pipeline that uses CodeCommit as its version-control repository, so that whenever a change is made to the Git repo, the change is automatically applied to a PHP application hosted in ECS containers.

By modeling your pipeline in CodePipeline, you can add even more stages and actions to your Continuous Delivery process so that it runs through all the tests and other checks, enabling you to deliver changes to production whenever there’s a business need to do so.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/ecs. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Notes

The sample solution currently only works in the us-east-1 AWS region. You will be charged for your AWS usage – including EC2, S3, CodePipeline and other services.

Resources

Here’s a list of some of the resources that are described in or influenced this post: