
Continuous Delivery to S3 via CodePipeline and CodeBuild

In this blog post, you’ll see a demonstration of Continuous Delivery of a static website to Amazon S3 via AWS CodeBuild and AWS CodePipeline. At the conclusion, you will be able to provision all of the AWS resources by clicking a “Launch Stack” button and going through the AWS CloudFormation steps to launch a solution stack.

Using S3 is useful when you want to host static files, such as HTML and image files, as a website for others to access. Fortunately, S3 provides the capability to configure a bucket for static website hosting. For more information on manually configuring this for a custom domain, see Example: Setting up a Static Website Using a Custom Domain.
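If you only need the bucket itself, without the pipeline described in this post, the same static website configuration can also be done from the AWS CLI. This is only a minimal sketch: the bucket name is a placeholder and the public-read upload is one common approach, not the only one.

  # Bucket name is a placeholder
  aws s3 mb s3://my-example-site-bucket
  aws s3 website s3://my-example-site-bucket --index-document index.html --error-document error.html
  # Upload the site content and make the objects publicly readable
  aws s3 sync ./html s3://my-example-site-bucket --acl public-read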

However, once you go through this process manually a few times, and if you’re like me, you’ll quickly grow tired of manually uploading new files, deleting old files, and setting the permissions for the files in the S3 bucket.

In this example, all the source files are hosted in GitHub and can be made available to developers. All of the steps in the process are orchestrated via CodePipeline and the build and deployment actions are performed by CodeBuild. The provisioning of all of the AWS resources is defined in a CloudFormation template.

By automating the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose to do so without needing to repeatedly manually upload files to S3. Instead, you just commit the changes to the GitHub repository and the pipeline orchestrates the rest. While this is a simple example, you can follow the same model and tools for much larger and sophisticated applications.

Figure 1 shows this deployment pipeline in action.


Figure 1 – Deployment Pipeline in CodePipeline to deploy a static website to S3

The remainder of this post describes how to configure the solution in your AWS account.

Prerequisites

Here are the prerequisites for this solution:

  • AWS Account – Follow these instructions to create an AWS account: Creating an AWS Account. Grant IAM privileges to access at least CodeBuild, CodePipeline, CloudFormation, IAM, and S3.
  • Fork GitHub Repo – Fork and clone your own stelligent/devops-essentials GitHub repository
  • OAuth Token – Create an OAuth token in GitHub and provide access to the admin:repo_hook and repo scopes.

To see these steps in more detail, go to devopsessentialsaws.com and see section 2.1 Configure course prerequisites.

Architecture and Implementation

In Figure 2, you see the architecture for provisioning an infrastructure that launches a deployment pipeline to orchestrate the build and deployment of the solution. You can click on the image to launch the template in CloudFormation Designer within your AWS account.

Figure 2 – CloudFormation Template for provisioning AWS resources

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource generation of this solution is described in CloudFormation, which is a declarative code language that can be written in JSON or YAML (or generated by more expressive domain-specific languages)
  • AWS CodePipeline – The CodePipeline stages and actions are defined in a CloudFormation template. This includes CodePipeline’s integration with GitHub, CodeBuild, and CloudFormation (For more information, see Action Structure Requirements in AWS CodePipeline).
  • GitHub – CodePipeline connects with an existing GitHub repository using the GitHub Source provider action.
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource to copy the website assets to the S3 bucket that hosts the static site
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource which defines the resources that the pipeline, CloudFormation, and other resources can access.

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including S3 and IAM.

S3 Buckets

There are two S3 buckets provisioned in this CloudFormation template. The SiteBucket resource defines the S3 bucket that hosts the website files copied from the GitHub source. The PipelineBucket stores the CodePipeline artifacts that are referenced across stages in the deployment pipeline.

  SiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      BucketName: !Ref SiteBucketName
      WebsiteConfiguration:
        IndexDocument: index.html
  PipelineBucket:
    Type: AWS::S3::Bucket

IAM Role

The IAM role for CodePipeline grants the pipeline the permissions it needs to access the resources required to deploy the static website.

  CodePipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - codepipeline.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: codepipeline-service
        PolicyDocument:
          Statement:
          - Action:
            - codebuild:*
            Resource: "*"
            Effect: Allow
          - Action:
            - s3:GetObject
            - s3:GetObjectVersion
            - s3:GetBucketVersioning
            Resource: "*"
            Effect: Allow
          - Action:
            - s3:PutObject
            Resource:
            - arn:aws:s3:::codepipeline*
            Effect: Allow
          - Action:
            - s3:*
            - cloudformation:*
            - iam:PassRole
            Resource: "*"
            Effect: Allow
          Version: '2012-10-17'

CodePipeline

The CodePipeline pipeline CloudFormation snippet shown below defines the two stages and two actions that orchestrate the deployment of the static website. The Source action within the Source stage configures GitHub as the source provider. Then, the pipeline moves to the Deploy stage, which runs CodeBuild to copy all the HTML and other assets to an S3 bucket that’s configured to be hosted as a website (a sketch of the buildspec such a CodeBuild project might use follows the pipeline snippet below).

  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
      - Name: Source
        Actions:
        - InputArtifacts: []
          Name: Source
          ActionTypeId:
            Category: Source
            Owner: ThirdParty
            Version: '1'
            Provider: GitHub
          OutputArtifacts:
          - Name: SourceOutput
          Configuration:
            Owner: !Ref GitHubUser
            Repo: !Ref GitHubRepo
            Branch: !Ref GitHubBranch
            OAuthToken: !Ref GitHubToken
          RunOrder: 1
      - Name: Deploy
        Actions:
        - Name: Artifact
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          InputArtifacts:
          - Name: SourceOutput
          OutputArtifacts:
          - Name: DeployOutput
          Configuration:
            ProjectName: !Ref CodeBuildDeploySite
          RunOrder: 1
      ArtifactStore:
        Type: S3
        Location: !Ref PipelineBucket
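The CodeBuild project referenced by the Deploy stage needs a build specification that copies the site content into the SiteBucket. The YAML below is only a minimal sketch, not the project’s actual buildspec; it assumes the site files live in an html/ folder and that the bucket name is passed in as a SITE_BUCKET environment variable.

  version: 0.2
  phases:
    build:
      commands:
        # Sync the static site to the S3 website bucket (SITE_BUCKET is an assumed environment variable)
        - aws s3 sync html/ "s3://${SITE_BUCKET}" --acl public-read --delete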

Costs

Costs vary depending on how you use certain AWS services and other tools, so here is a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. Note that actual costs depend on your unique environment and deployment, and the AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodePipeline – Customers can create new pipelines without incurring any charges on that pipeline for the first thirty calendar days. After that period, the new pipelines will be charged at the existing rate of $1 per active pipeline per month. For more information, see AWS CodePipeline pricing.
  • GitHub – No charge for public repositories
  • IAM – No additional cost.
  • S3 – If you launch the solution and delete the S3 bucket, it’ll be pennies (if that). See S3 Pricing.

The bottom line on pricing for this particular example is that you will be charged no more than a few pennies if you launch the solution, run through a few changes, and then terminate the CloudFormation stack and associated AWS resources.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region.

Step 2. Launch the Stack

Click on the “Launch Stack” button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 5 minutes

The template includes default settings that you can customize by following the instructions in this post.
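If you prefer the CLI to the “Launch Stack” button, you can create the stack with a command like the sketch below. The stack name is arbitrary and the template URL is a placeholder; the parameter keys mirror the ones referenced in the snippets above (SiteBucketName, GitHubUser, GitHubRepo, GitHubBranch, GitHubToken).

  aws cloudformation create-stack \
    --stack-name devops-essentials-s3-site \
    --template-url https://s3.amazonaws.com/EXAMPLE_BUCKET/static-site-pipeline.yml \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=SiteBucketName,ParameterValue=my-example-site-bucket \
                 ParameterKey=GitHubUser,ParameterValue=YOURGITHUBUSERID \
                 ParameterKey=GitHubRepo,ParameterValue=devops-essentials \
                 ParameterKey=GitHubBranch,ParameterValue=master \
                 ParameterKey=GitHubToken,ParameterValue=YOUR_OAUTH_TOKEN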

Step 3. Test the Deployment

Here are the steps to test the deployment:

  1. Once the CloudFormation stack is complete, select the checkbox next to the stack and go to the Outputs tab.
  2. Click on the PipelineUrl link to launch the CodePipeline pipeline.
  3. Click on the SiteUrl link to launch the website that was configured and launched as part of the deployment pipeline
  4. From your Terminal, type (replacing YOURGITHUBUSERID with your GitHub userid):
    git clone https://github.com/YOURGITHUBUSERID/devops-essentials
  5. Make obvious visual changes to any of your local files (for example, change the .bg-primary{color:#fff;background-color: value in your forked copy of devops-essentials/html/css/bootstrap.min.css) and type the following from your Terminal:
    git commit -am "add new files" && git push
  6. Go back to your pipeline in CodePipeline and see the changes successfully flow through the pipeline.

DevOps Essentials on AWS Video Course


This and many more topics are covered in the DevOps Essentials on AWS Complete Video Course (Udemy, InformIT, SafariBooksOnline). In it, you’ll learn how to automate infrastructure and deployment pipelines using AWS services and tools. If you’re a software or DevOps-focused engineer or architect interested in learning how to use AWS Developer and AWS Management Tools to create a full-lifecycle software delivery solution, it’s the course for you. The focus of the course is on deployment pipeline architectures and their implementations.

Additional Resources

Here are some of the supporting resources discussed in this post:

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Acknowledgements

My colleague Casey Lee created the initial CodePipeline/CodeBuild/S3 CloudFormation template that’s the basis for this solution.

Enforcing Compliance with AWS Organizations

You have a large organization with several development teams that work on various software projects that support your business. A year ago, you brought in a consultant who told you to use multiple AWS accounts because there were benefits to be gained. For example, using multiple accounts we can contain the damage from a possible security breach and isolate each team’s work so that others don’t inadvertently disrupt it. But there are also issues that we must deal with.
When a company has more than one AWS account, and especially many AWS accounts, it becomes difficult to manage those accounts. How do we know that all teams are using good security policies? How do we take advantage of billing incentives for using more and more of an AWS resource? How do we manage the billing in general for all of those accounts? If a company is in a business that requires it to comply with a set of standards such as PCI or HIPAA, how can we guarantee that teams are using only services that are certified compliant? And how can we automate the creation of new accounts so that they are properly configured to begin with?

What Are AWS Organizations?

AWS Organizations allows companies with multiple AWS accounts to manage those accounts, from both a billing and an administrative perspective, from a single master (root) account. Why is this important? Until Organizations came along, having multiple accounts was a bit like the Wild West. Each account was on its own and there was no way to manage all of them from one place. Users had no way to apply policies, manage permissions, or manage billing from a “company” perspective. AWS Organizations gives us the tools we need to bring these accounts together and control them all in a predictable way.

Service Control Policies (SCPs)

Service Control Policies allow us to define the services that an account can access. In our case, we know that we want to allow access to only the services that are HIPAA compliant. Any service that isn’t compliant should not be allowed to be used by the teams. Using the root account, we can push this policy out to all accounts that we have within our organization.

Organizational Units (OUs)

Most organizations have accounts that have different requirements. Using the example above, some accounts may have to be HIPAA compliant while others may be used for other purposes and do not have to follow any guidelines. AWS Organizations gives us the ability to group accounts into Organizational Units.

Organizational Units allow us to split our accounts into separate groups and apply different policies to those groups. Continuing with the example from above, we can have an OU for all accounts that must be HIPAA compliant and an OU for accounts that are general purpose. All accounts in the HIPAA OU will be restricted to only the services that are HIPAA certified while the accounts in the general purpose OU have access to all AWS services. The rules that are applied to an OU even overrule account administrators. If an admin accidentally logs into an account and specifically sets permissions in that account to allow access to a service that has been restricted at the OU level, the OU rule that was applied to the account will still block that access.

OUs can be up to 5 levels deep. You can have multiple OUs inside of an OU, which allows even more granular control over accounts. As an example, let’s assume that some of our HIPAA accounts also handle patient transactional data. This means that we are dealing with both PCI and HIPAA data in those accounts. We can create an OU inside of our HIPAA OU that restricts access to only services that are PCI compliant. The result is that at the first level we have accounts that can only access HIPAA compliant services, and in the PCI OU under the HIPAA OU we have accounts that can only access services which are both HIPAA and PCI compliant.

One thing that must be remembered is that the root or “master” account cannot be restricted. Even if it is placed within an OU, none of the AWS services will be restricted to this account. Therefore, it is essential that the root account is not used by anyone other than the administrator of all accounts.

Account Creation Automation

It is often the case that a company will grow and will add teams as they are needed. These new teams will sometimes need their own set of accounts to work in to avoid disrupting the work of other teams. AWS Organizations provides the ability to automate this task. We can create an account, attach policies to this account, and add this account to the appropriate group all through the Organizations API. Not only is this useful for new teams, but it is also useful when developers need test accounts that need to be created quickly, then deleted when work within that account is finished.
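A minimal sketch of that automation with the AWS CLI is shown below. The email, account name, and role name are placeholders, and because account creation is asynchronous, you poll the returned request ID for status.

aws organizations create-account --email new-team@example.com --account-name "New Team" --role-name OrganizationAccountAccessRole

aws organizations describe-create-account-status --create-account-request-id CREATE_ACCOUNT_REQUEST_ID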

How Does All of That Help Me?

Let’s take a look at an example and apply the tools above to solve the problems that companies with multiple accounts face. Let’s assume we have a health care company with a wide range of systems under their control. Some systems house identifiable patient data, which requires those systems to be HIPAA compliant, and some systems simply house generic data that can be used to generate high-level reports. The latter systems do not require any special treatment. One other platform the company has allows patients to log in and make payments. This platform allows users to store their credit card data for future transactions, which means the services they use must be PCI compliant.

Where Do We Start?

Before we begin we need to gather our requirements. We know that our company must be both HIPAA and PCI compliant so we can start by breaking the teams down into groups of standards they must follow.

Compliance                                             | Number of Teams
HIPAA                                                  | 9
PCI                                                    | 7
HIPAA and PCI (these overlap from the previous groups) | 4
None                                                   | 3

Once we have our teams broken out into groups, we need to know how many accounts each team has. For this example, we are going to assume each team has 4 accounts: Dev, Test, QA, and Prod. Note that we have a group of 4 teams whose accounts overlap in service restriction requirements. Unfortunately, Organizations will not allow an account to belong to 2 Organizational Units at the same hierarchical level. We will discuss how to handle this later when we create our OUs and begin adding accounts to them.

Once we have our accounts grouped we are ready to start planning our organization. The resulting Organization will have this overall structure:

Figure – Overall AWS Organizations structure (Cloudcraft diagram)

LIMITATION ALERT:

It’s worth noting at this point that AWS Organizations treats accounts differently depending on how they were originally created. The Organizations API provides the ability to remove an account from the Organization, but only if that account was invited to join the organization. If the account was created by the organization, that account cannot be removed without deleting the account entirely. The Organizations API also does not provide the ability to delete an account, no matter how it was created; to delete an account, you must log into that account and do so manually. These limitations may influence how companies want to handle bringing accounts into an organization.

One other important fact we need to know is that the account that owns the user we use to create the Organization will become the master account. Make sure never to create an Organization from an account that needs to have policies applied to it. A master account will always have “root” access, even if it is moved to an Organizational Unit that restricts services. The services of the master account cannot be restricted and the wide-open policies will always override anything that is more restrictive.

Once we have our account information, let’s move on to creating the organization.

Creating an Organization

Before we begin, we need to make sure we have the AWS command line tools installed on the OS of our choice. Organizations can also be managed using the AWS SDK for your language of choice, but we’re going to use the command line tools for this example. Again, make sure you are using a user from the account you want to be the master, and make sure that user is configured with your CLI tools. Once our configuration has been verified, we can issue the following command:

Minimum permissions for your user:

  • organizations:CreateOrganization
aws organizations create-organization --feature-set ALL

Notice that we are passing a parameter to the create-organization command called "feature-set". This tells AWS what control the organization will have over our accounts. There are 2 values we can pass in here: ALL and CONSOLIDATED_BILLING. The ALL value enables consolidated billing and also allows the organization to put policies in place that restrict the services an account can access; this is the default if the parameter is omitted. A value of CONSOLIDATED_BILLING allows the new organization to consolidate the billing of all accounts under the master account, but the Organization will not be allowed to restrict the services each account has access to. For our company, we need ALL functionality so we retain the ability to restrict some accounts to only HIPAA and PCI compliant services.

After running this command, we get back a response from AWS

{ "Organization": { "AvailablePolicyTypes": [{ "Status": "ENABLED", "Type": "SERVICE_CONTROL_POLICY" }], "MasterAccountId": "111111111111", "MasterAccountArn": "arn:aws:organizations::111111111111:account/o-exampleorgid/111111111111", "MasterAccountEmail": "bill@example.com", "FeatureSet": "ALL", "Id": "o-exampleorgid", "Arn": "arn:aws:organizations::111111111111:organization/o-exampleorgid" } }

We need to capture the “Id” value and keep that for future use.
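If you’re scripting the rest of this walkthrough, the organization Id, and the root Id that we’ll use later as the parent for top-level OUs and as the source parent when moving accounts, can be captured with the --query option. This is just a convenience sketch:

ORG_ID=$(aws organizations describe-organization --query 'Organization.Id' --output text)
ROOT_ID=$(aws organizations list-roots --query 'Roots[0].Id' --output text)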

Let’s Add Some Accounts

Inviting Accounts

Now that we have a newly created Organization, we can start adding our accounts to our organization. As mentioned above, there are 2 ways to add an account to an Organization. The first method and the one we’ll be using primarily for our example is to send an invitation to our accounts that already exist.

I want to reiterate that it’s important to note here that any account we invite to our Organization can be removed at any time. If we want our accounts tied to this Organization without the option to be removed (as a way of ensuring our policies are always in place), we need to create that account from within the Organization. Any resources would have to be migrated from the existing account to the new account.

To send an invitation to an existing account, we can issue the following command:

Minimum permissions for your users:

  • organizations:DescribeOrganization
  • organizations:InviteAccountToOrganization
aws organizations invite-account-to-organization --target '{"Type": "ACCOUNT", "Id": "ACCOUNT_ID_NUMBER"}'

We are passing in a data structure to the target parameter of the command. In this example, we are passing in the account ID. The key Type can also have values of EMAIL or ORGANIZATION. In those cases, we would set the Id to the appropriate value.

Another optional parameter that we could have passed is "notes". If we want to include additional information in the email that is auto-generated by Organizations, we can pass it using the "notes" parameter.

The response from this command should look like this:

{
  "Handshake": {
    "Action": "INVITE",
    "Arn": "arn:aws:organizations::111111111111:handshake/o-exampleorgid/invite/h-examplehandshakeid111",
    "ExpirationTimestamp": 1482952459.257,
    "Id": "h-examplehandshakeid111",
    "Parties": [{
      "Id": "o-exampleorgid",
      "Type": "ORGANIZATION"
    },
    {
      "Id": "juan@example.com",
      "Type": "EMAIL"
    }],
    "RequestedTimestamp": 1481656459.257,
    "Resources": [{
      "Type": "MASTER_EMAIL",
      "Value": "bill@amazon.com"
    },
    {
      "Type": "MASTER_NAME",
      "Value": "Org Master Account"
    },
    {
      "Type": "ORGANIZATION_FEATURE_SET",
      "Value": "FULL"
    },
    {
      "Type": "ORGANIZATION",
      "Value": "o-exampleorgid"
    },
    {
      "Type": "EMAIL",
      "Value": "juan@example.com"
    }],
    "State": "OPEN"
  }
}

Once again, we are interested in the “Id” value of the “Handshake” object. Each time we run the command to invite an account, we will receive this “Id” back in the response. We need to record that value for each account we invite so we can use it in the next step to accept the invitation.
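When there are many accounts to invite, a small shell loop can send each invitation and record the handshake Id as it goes. The account IDs, notes text, and output file in this sketch are placeholders:

for ACCOUNT_ID in 222222222222 333333333333 444444444444; do
  aws organizations invite-account-to-organization \
    --target "{\"Type\": \"ACCOUNT\", \"Id\": \"${ACCOUNT_ID}\"}" \
    --notes "Please join our organization" \
    --query 'Handshake.Id' --output text >> handshake-ids.txt
done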

Accepting Invitations

The process of inviting and adding an account to an organization is a “handshake” transaction. An invitation is sent to the account we want to add to our organization and the “owner” of that account must log in and accept that invitation. Fortunately for us, this can also be accomplished through the CLI. Again, we need to make sure our CLI is configured with a principal user that has the IAM permissions to accept that handshake. Once we have the CLI configured, we can issue the following command:

Minimum permissions for your user:

  • organizations:ListHandshakesForAccount
  • organizations:AcceptHandshake
  • organizations:DeclineHandshake
aws organizations accept-handshake --handshake-id HANDSHAKE_ID

The handshake ID that is being passed into this command was given to us in the response of the command to send the invitation.

Remember that we can also send and accept invitations through the console. For users with a few accounts, this may be acceptable. But if you are dealing with more than a few accounts you are definitely going to want to automate this process.
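As a sketch of that automation, the pending invitation can be looked up and accepted from the invited account in one pass. This assumes a CLI profile named invited-account is configured with that account’s credentials:

HANDSHAKE_ID=$(aws organizations list-handshakes-for-account --profile invited-account \
  --query "Handshakes[?State=='OPEN' && Action=='INVITE'].Id | [0]" --output text)
aws organizations accept-handshake --handshake-id "$HANDSHAKE_ID" --profile invited-account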

LIMITATION ALERT:

AWS limits the number of invitations that can be sent per day to 20. If you need to send more than that, contact customer support and they can raise your limit.

Using Organizational Units

Here’s where the real power of Organizations starts to show. Now that we have our accounts added to the Organization we need to group them into OUs and restrict the services that can be used within those accounts. Before we started creating the Organization, we took the time to group our accounts by the compliance standard they needed to adhere to. We can use that information to help us create our OUs to move our accounts into. Looking at our chart we can see that we have four different types of accounts. We have HIPAA compliant, PCI compliant, HIPAA and PCI compliant, and accounts that require no restrictions at all. We are going to create three top-level OUs and one OU that is within either the PCI or the HIPAA OU. Because we are simply overlapping 2 sets of compliance standards, it really doesn’t matter which OU we use as a parent.

We’ll start by creating the three top-level OUs. We can issue the following commands to create those:

Minimum permissions for your user:

  • organizations:CreateOrganizationalUnit
aws organizations create-organizational-unit --parent-id PARENT_ORG_ID --name HipaaOU
aws organizations create-organizational-unit --parent-id PARENT_ORG_ID --name PciOU
aws organizations create-organizational-unit --parent-id PARENT_ORG_ID --name GeneralOU

We now have three top-level Organizational Units that we can add accounts to. We have already invited all existing accounts to our Organization. They reside at the top-level of our Org. To place those accounts into the proper OU we need to issue the “move” command on each account.

Minimum permissions for your user:

  • organizations:MoveAccount
aws organizations move-account --account-id ACCOUNT_ID --source-parent-id PARENT_ORG_ID --destination-parent-id OU_ID

We will need to issue this command for each account we need to move to an OU. We need to make sure we are using the correct destination ID to place the account into the proper OU.
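As with invitations, moving many accounts is easier to script. This sketch assumes ROOT_ID and HIPAA_OU_ID were captured from the earlier commands, and the account IDs are placeholders:

for ACCOUNT_ID in 222222222222 333333333333; do
  aws organizations move-account --account-id "$ACCOUNT_ID" \
    --source-parent-id "$ROOT_ID" --destination-parent-id "$HIPAA_OU_ID"
done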

We need to repeat the last 2 steps to create the sub OU for our overlapping HIPAA and PCI accounts. This time around the PARENT_ORG_ID will be changed from the ID of the organization itself to the ID of the organizational unit we want to create this sub OU in. We will create this OU within the HipaaOU that we created in the previous step.
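For example, assuming HIPAA_OU_ID holds the Id returned when we created the HipaaOU, the nested OU could be created like this (the name HipaaPciOU is just a placeholder):

aws organizations create-organizational-unit --parent-id HIPAA_OU_ID --name HipaaPciOU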

And we can move those accounts that require both HIPAA and PCI compliance into this new OU using the same command we used to move the other accounts.

Service Control Policies

Simply moving accounts into OUs accomplishes nothing on its own. In order to take advantage of the power of these new OUs, we need to apply policies that will restrict the services that the accounts within the OU can access. At the time of this writing, Service Control Policies are the only policies that can be applied to an OU.

In order to apply a Service Control Policy to our OUs, we need to create a policy file that we can pass into the create-policy command. We could place this text within the command itself, but with the number of services we need to include and the fact that we would have to escape characters, that approach is error-prone and very messy. Here’s what our policy file will look like:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:*",
      "rds:*",
      "dynamodb:*"
    ],
    "Resource": "*"
  }]
}

In the above policy file, we are explicitly allowing a few services. There are many more HIPAA compliant services, but for the sake of this example, we are going to limit the policy to these three services.

TRAP FOR YOUNG PLAYERS:

It needs to be mentioned here that Service Control Policies applied to an OU do not grant any user any rights. We are not pushing this policy as a way to give each user in the accounts in the OU access to these services. This policy is in place to restrict the permissions that can be applied to a user, and it applies to all users, including administrators.

It’s also worth noting that the policies we are putting in place to restrict services assume that the “Allow *” policies have been removed from the root, OU, and individual accounts. If “Allow *” is still in place in any of these locations, the above policy will have no effect on the account(s) it is applied to.

We need to create two additional policy files, one for each additional OU type. Because we removed the “Allow *” policy from all accounts, OUs, and the root Organization, we will need to create a policy file for our GeneralOU that allows all services for that OU. We will reuse the PCI policy file for the sub OU that allows both HIPAA and PCI services.
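For reference, the allow-all policy file for the GeneralOU (referred to as allow_all_policy.json in the commands below) would look something like this:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}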

Once we have our policy files in place, we can start creating those policies:

Minimum permissions for your user:

  • organizations:CreatePolicy
aws organizations create-policy --content file://allow_hipaa_policy.json --name AllowHipaaServices --type SERVICE_CONTROL_POLICY --description "This policy allows all HIPAA services"
aws organizations create-policy --content file://allow_pci_policy.json --name AllowPCIServices --type SERVICE_CONTROL_POLICY --description "This policy allows all PCI services"
aws organizations create-policy --content file://allow_all_policy.json --name AllowAllServices --type SERVICE_CONTROL_POLICY --description "This policy allows all services"

We have created three new policies that now need to be attached to our OUs.

Minimum permissions for your user:

  • organizations:AttachPolicy
aws organizations attach-policy --policy-id HIPAA_POLICY_ID --target-id HIPAA_OU_ID
aws organizations attach-policy --policy-id PCI_POLICY_ID --target-id PCI_OU_ID
aws organizations attach-policy --policy-id GENERAL_POLICY_ID --target-id GENERAL_OU_ID
aws organizations attach-policy --policy-id PCI_POLICY_ID --target-id HIPAA_PCI_OU_ID

Let’s take the time to examine what is happening here. We know that we have removed all permissions for all services from our root Organization, OUs, and accounts. We created policies that allow the services that are compliant with HIPAA and PCI respectively, and we know that when we apply those policies to our OUs, the accounts within those OUs will have access to those services. In the case of the sub OU for the overlapping accounts, the HIPAA OU’s policy still applies because the sub OU sits beneath it. Attaching the AllowPCIServices policy to the sub OU therefore means its accounts can use only the services permitted at both levels, in other words, services that are both HIPAA and PCI compliant.
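To double-check what ended up attached where, you can list the policies on a target OU. This is just a verification sketch:

aws organizations list-policies-for-target --target-id HIPAA_OU_ID --filter SERVICE_CONTROL_POLICY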

Conclusion

Success! We have created a new Organization, invited our accounts into that organization, and grouped those accounts into OUs so we can ensure each group of accounts complies with the required standards. When dealing with a few accounts, working from the command line is fine; for larger numbers of accounts, it is highly recommended to script the process.

AWS Organizations helps companies manage multiple accounts from a billing and policy standpoint. Using Organizations helps prevent accidental policy changes that would violate the compliance standards a company must follow. It also reduces the time and effort required to create new accounts by providing an API that can create new accounts automatically with the correct policies already attached. Users can be restricted to the accounts they need access to and blocked from the accounts they don’t. Any company that has multiple accounts can benefit from the features provided by Organizations.

About Stelligent
Stelligent is an APN Advanced Consulting Partner and holds the AWS DevOps Competency. As a technology services company that provides DevOps Automation on the Amazon Web Services (AWS) Cloud, we aim for “one-click deployment.” Our reason for being is to help our customers gain the ability to continuously deploy their software, when they want to, and with confidence. We’ve been providing DevOps Automation solutions on AWS since 2009. Follow @Stelligent on Twitter. Learn more at http://www.stelligent.com.

Stelligent is an APN Launch Partner for the AWS Management Tools Addition to the AWS Service Delivery Program

Stelligent, an AWS Partner Network (APN) Advanced Consulting Partner specializing exclusively in DevOps Automation on the Amazon Web Services (AWS) Cloud, announced that it is a launch partner for four additional services in the AWS Service Delivery Program: AWS CloudFormation, AWS CloudTrail, AWS Config, and Amazon EC2 Systems Manager. This means that Stelligent has demonstrated a successful track record of delivering these specific AWS services and the expertise to support customers in using them.


“The ability to deploy high-quality code in hours, not months, is something that we can help any company – including many in the Fortune 500 – achieve,” said Paul Duvall, Stelligent CTO and co-founder. “Using AWS Management Tools along with other AWS services, we can drastically reduce our customers’ development times while increasing the rate at which they can introduce new features.”

The AWS Service Delivery Program highlights APN Partners with a track record of delivering specific AWS services to customers. Attaining an AWS Service Delivery Distinction allows partners to differentiate themselves by showcasing to AWS customers areas of specialization.

The four AWS Management Tools included in the AWS Service Delivery Program are (source: AWS):

  • AWS CloudFormation – Create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
  • AWS CloudTrail – Track user activity and API usage
  • AWS Config – Record and evaluate configurations of your AWS resources
  • Amazon EC2 Systems Manager – Easily configure and manage Amazon EC2 and on-premises systems

Stelligent uses these AWS Management Tools in creating DevOps Automation solutions for customers so they can release new features to users, on demand, and reduce the costs of delivering software by reducing overall lead time. Resulting benefits include the following:

• the ability to release software with every successful change
• significant reduction of cycle time
• increased confidence in what is deployed
• increase in ability to experiment
• reduction of overall costs

“We are proud to work with AWS to deliver DevOps Automation solutions to our customers, allowing them to release new features to users whenever they choose,” said Duvall. “Being a launch partner in the AWS Management Tools addition to the AWS Service Delivery Program means a lot to us — this is what we live and breathe, and we do so exclusively for our customers targeting AWS. We obsess over customers, and we obsess over applying what we believe are essential practices to achieve the aims of continuous delivery. This acknowledgement will help us reach still more customers who value that passion.”


DevOps on AWS Radio: AWS CodePipeline and Amazon Alexa (Episode 11)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps on AWS news and discuss how to use AWS CodePipeline to deploy an Amazon Alexa skill.

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. What was the “Use AWS CodePipeline to Deploy Amazon Alexa Skills” blog post about?
  2. What is AWS CodePipeline and what are its benefits? What are alternatives to using CodePipeline?
  3. How do you create a pipeline in CodePipeline?
  4. Which AWS services does CodePipeline integrate with? How about non-AWS tools and services?
  5. How do you automate the provisioning of CodePipeline?
  6. Describe Amazon Alexa. What kinds of things can you do with Alexa? Which devices does it support?
  7. Describe Lambda.
  8. How did you orchestrate CodePipeline to deploy a Lambda function?
  9. How did you configure Alexa to run the Lambda function?
  10. How can listeners learn more about this solution?

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Screencast: Full-Stack DevOps on AWS Tool

Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers. However, there is a significant learning curve for developers to get their microservices deployed. mu is a full-stack DevOps on AWS tool that simplifies and orchestrates your software delivery lifecycle (environments, services, and pipelines). It is open source and available at http://getmu.io/. You can click the YouTube link below (we’ve also provided a transcript of this screencast in this post).

Let’s demonstrate using mu to deploy a Spring Boot application to ECS. Here’s our microservice: we’ve already got our Dockerfile set up, we’ve got our Gradle file so that we can compile the code, and we see the various classes necessary for the service. We’re using Liquibase for managing our database, so that definition file is there, and we’ve got some unit tests defined. If we take a look at the Dockerfile, we see that it’s pretty straightforward: it builds from the Java image, adds the jar, and for the entry point it just runs java -jar. So, we run mu init and that creates two files for us. It creates a mu.yml file, which we see here, and we need to add some things to the file it generates; specifically, we want to specify Java 8 for the (AWS) CodeBuild image. Then we edit the buildspec file and tell it to use gradle build for the build command. The buildspec is a standard CodeBuild file for defining your project. So we have our two new files, buildspec.yml and mu.yml; we go ahead and commit those and push them up to our source repository, in this case GitHub. Then we run the command mu pipeline up, which creates a CloudFormation stack for managing our CodePipeline and CodeBuild projects. It prompts us for the GitHub token, the access token that you’ve defined inside GitHub so that CodePipeline can access your repository, so we provide that token. Then we see that it’s creating various things like IAM roles for CodeBuild to do its business and the actual CodeBuild projects that are going to be used; there are quite a few different CodeBuild projects for building, testing, and deploying. Now we run the command mu service show and it shows us that a pipeline has been created and has started on the first step.

Let’s go ahead and open up AWS CodePipeline in the console and we see that, sure enough, the Source stage of our pipeline is running. Then we see there’s a Build stage with the Artifact and Image actions in it; that’s where we compile and build our Docker image. There’s an Acceptance stage and then a Production stage, both of which do a deployment and then testing. Jumping back over to the command line, we can run mu service show and we see that the Source action is currently running. That just takes a minute before the Artifact action of the Build stage is triggered, and that’s where we’re actually doing the compiling. The command we can run here is mu pipeline logs -f, and we add the -f so that we follow the logs. What happens is all of the output from CodeBuild gets sent to CloudWatch Logs, so the mu pipeline logs command allows us to tail CloudWatch Logs and watch the activity in real time. We see that our artifacts are being resolved for dependencies and then we see “build success”, so our artifact has been built and our unit tests have passed. It just takes a second for CodeBuild to upload the artifact and then trigger the pipeline to move to the next action, which is Image. In the Image action, it runs docker build against our artifact to create a Docker image and then pushes that image up to ECR. It also creates the ECR repository, if it doesn’t exist yet, through a CloudFormation stack. So we run mu pipeline logs again and we can see the Image action running: we’re pulling down the Docker base image (that Java image), then there’s our docker build, and now we’re pushing back up to ECR. It takes just a minute to upload the new Docker image with our Spring Boot application on it, and that completes successfully.

Now if we jump back over to mu service show, after a second we should see that we progress beyond the Build stage and into the Acceptance stage. In the Acceptance stage there are two actions. First is a deploy action that takes the image that was created and creates a new ECS service for it, and that’s what we see going on here. First it makes sure the environment is up to date: the ECS cluster, the Auto Scaling group for it, and all the instances for ECS. It also updates any databases that are defined, and then finally it deploys the service. So we see here there’s a CREATE_IN_PROGRESS; the status of the deployment to the Dev environment is in progress, so there’s a CloudFormation stack being deployed. I’ll go ahead and run the command mu service logs. Just like there are logs for the pipeline, all the logs for your service are sent to CloudWatch Logs, so here we’re watching the logs for our service starting up; these are the Spring Boot output messages. If you’ve used Spring Boot before this should look familiar, and it’s very helpful for troubleshooting an application to be able to see its logs in real time.

So the deployment is complete; based on the logs, we saw that the service is up, so let’s go and look at the environment. We do mu env list, we see the Dev environment, and when we show it, we can see the EC2 instance associated with it as well as the base URL for the ELB. I’m going to go ahead and run a curl command against that, adding the bananas URI at the end of it, and pipe that to jq just to make it look pretty, and sure enough, we get a successful response. So, our app has been deployed successfully, and we see that we are in the Approval stage waiting for approvals, which means we’ve completed the Acceptance stage.

Let’s take a look at CloudFormation to see what mu has created for us. We see the CloudFormation stacks over here; remember, everything that mu does is managed through CloudFormation. There’s no other database or anything behind mu, just native AWS resources. For example, if we look at the VPC for the Dev environment, we see all the things you’d expect to see: routes, network ACLs, subnets, a NAT gateway, and the VPC itself. If we go to the cluster, we see the Auto Scaling group for the ECS container instances, the application load balancer that’s defined for the environment, all the necessary security groups, and some scaling policies to scale that Auto Scaling group in or out based on how many tasks are currently running. Then there’s the service: the banana service has been deployed to the Dev environment, and we see the IAM roles, Task Definition, and so on for the service.

Now, one thing we didn’t do previously was any testing, so here’s what you can do: create a file called buildspec-test.yml, and anything that you define in this test YAML will be run as a test action after the deployment is made. It’s a standard CodeBuild buildspec file. In this case we’re going to use a tool called Newman, a Node.js command-line tool for running Postman collections (Postman is a tool for testing RESTful APIs), so we’re configuring this to run Newman against our Postman collections. We also have to make a change to mu.yml: we configure the acceptance environment to use a Node.js CodeBuild image, so that’s what we’ve done there. With those two changes, we can run mu pipeline up, which updates the CodeBuild project to use the Node.js image, and once our pipeline is up to date we can commit our change, the buildspec-test file. Once we push that up, the pipeline starts running again; this time the tests will actually run and we’ll get some assurance that the code is ready to go to production. So we make that change, push it, and if we look at the service we’ll see that the Source action has triggered; we’ll just let this run for a while. The whole pipeline has to run, but actions like Artifact and Image won’t really cause any change because we didn’t actually change the source code; they go ahead and run anyway. We are now in the Image stage, where we take the new jar file, build a Docker image from it, and push that up to ECR, and we’ve now hit the Deploy stage, so the latest Docker image is being used for the ECS service.

Once that completes, we’ll run mu pipeline logs again to watch the CodeBuild project doing the testing, and here we go: the testing is running. It runs npm install to install our dependencies, namely the Newman tool, and then we see some results. I see status code 200, which looks good; under the fail column I see a bunch of zeros, which looks great; and then I see “build success”. So not only has our application been deployed to ECS, but we’ve also been able to test it, and now those tests will run as part of every execution of the pipeline, as part of every commit. The other thing we’ll recognize here is that this application we built manages our inventory of bananas, but it doesn’t have a real database behind it; we’re just using the H2 database that is available with Java. So let’s make a change and configure mu to use a real database. With mu, that’s as easy as defining a database: you give it a name, and you can specify other things like a type, but it will default to Aurora on RDS. Then you’ll want to pass some environment variables so we can give the database connection information to our Spring app; since we’re using a Spring data source, it’s just a matter of defining these three environment variables. You’ll notice that the username, password, and endpoint are not actually in the mu.yml file. We don’t want those things in there; what happens is mu creates those for us and makes them available as CloudFormation parameters that we can reference using the dollar-sign notation that CloudFormation offers. OK, now that we’ve got that change made, we add our new file, commit the change, and push it up, which triggers a new run of the pipeline. Again, we’ve got to go through all those earlier actions before we ultimately get to the deploy action where the RDS database will be created; you can choose any RDS database type, but we’re using Aurora by default.

Now, one question is: how does the password get defined? The way this works is we use a service that AWS has called Parameter Store, which manages secrets. When mu starts up, it checks whether there’s a password defined, and if there isn’t, it generates a random 16-character string, adds it to Parameter Store, and then later on when it deploys the service it pulls the password out of Parameter Store and passes it in as an environment variable. Those parameters are encrypted with KMS, a key management service, so they are secure.
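The underlying Parameter Store operations look roughly like this sketch; the parameter name and value here are placeholders, not mu’s actual naming convention:

aws ssm put-parameter --name /mu/dev/banana-service/db-password --type SecureString --value "GENERATED_PASSWORD"
aws ssm get-parameter --name /mu/dev/banana-service/db-password --with-decryption --query 'Parameter.Value' --output text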

OK, so looking at the logs from the service now, these are our Spring Boot startup logs. What I’m expecting to see is that rather than seeing H2 as the dialect… there you go, we see MySQL is the dialect for the connection. That tells me that Spring Boot detected our environment variables and recognized that we are in fact trying to talk to MySQL; let me go ahead and highlight that. So, this tells us that our application is connecting to a MySQL database which is provided by RDS and wired up via mu. We can look at our service again and watch the pipeline run, and we can get some confirmation that we didn’t break anything, because we have those tests as part of our pipeline now. So we’ll let this go… our tests are running. Once that completes, we’ll have a good feeling that this change is ready to promote to production.

Well thanks for watching and check out https://getmu.io to learn more.

Use AWS CodePipeline to Deploy Amazon Alexa Skills

If you’ve done any experimentation with the Amazon Alexa voice service, you’ve probably learned that you can use AWS Lambda to write functions that can be executed from Alexa. As a developer, what’s exciting about this is that you can create your own custom Alexa skills to perform anything suited for voice-based computing.

You’ll probably also learn that there are numerous manual actions required to integrate the various tools and code to deploy an Alexa skill. Once you create the Lambda function, you need to create a zip file with any packages that the function requires and upload it to Amazon S3. Moreover, you need to store code assets somewhere and then orchestrate the build and deployment of the function(s) that are run by your Alexa skill. Finally, you need to configure the Alexa skill itself using the Alexa Skills Kit (ASK).
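For a single function, that manual routine looks roughly like the sketch below (the function name, bucket, and file names are placeholders); the rest of this post replaces it with an automated pipeline.

  zip -r alexa-skill.zip index.js node_modules/
  aws s3 cp alexa-skill.zip s3://my-example-artifact-bucket/alexa-skill.zip
  aws lambda update-function-code --function-name myAlexaSkillFunction --zip-file fileb://alexa-skill.zip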

In this post, you will learn how to orchestrate the deployment of an Alexa skill (written in AWS Lambda) using the AWS Developer Tools suite – including AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline. The provisioning of all of the AWS resources is defined in an AWS CloudFormation template. By automating many of the actions and stages into a deployment pipeline, you can release changes to users in production whenever you choose to do so. You’ll see an example that walks you through the deployment process.

Figure 1 shows this deployment pipeline in action.


Figure 1 – Deployment Pipeline in CodePipeline to deploy a Lambda function

Prerequisites

Here are the prerequisites for this solution:

Architecture and Implementation

All code assets are stored in AWS CodeCommit. We define a deployment pipeline in AWS CodePipeline to orchestrate the solution by configuring a Source action for CodeCommit, a build action with CodeBuild, and deploy actions for a CloudFormation changeset. The provisioning of AWS resources is defined in CloudFormation.

In Figure 2, you see the architecture for provisioning an infrastructure that launches a deployment pipeline to orchestrate the build and deployment of a Lambda function. You can click on the image to launch the template in CloudFormation Designer.

Figure 2 – CloudFormation Template for provisioning AWS resources

The components of this solution are described in more detail below:

  • AWS CloudFormation – All of the resource generation of this solution is described in CloudFormation, which is a declarative code language that can be written in JSON or YAML
  • AWS CodePipeline – The CodePipeline stages and actions are defined in a CloudFormation template. This includes CodePipeline’s integration with CodeCommit, CodeBuild, and CloudFormation (For more information, see Action Structure Requirements in AWS CodePipeline).
  • AWS CodeCommit – Creates a CodeCommit Git repository using the AWS::CodeCommit::Repository resource
  • AWS CodeBuild – Creates a CodeBuild project using the AWS::CodeBuild::Project resource to package and store the Lambda function
  • AWS IAM – An Identity and Access Management (IAM) Role is provisioned using the AWS::IAM::Role resource which defines the resources that the pipeline, CloudFormation, and other resources can access.
  • AWS SNS – Provisions a Simple Notification Service (SNS) Topic using the AWS::SNS::Topic resource. The SNS topic is used by the CodeCommit repository for notifications.
  • Serverless Application Model (SAM) – “The AWS Serverless Application Model (AWS SAM) extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.” [Source]
  • Amazon Alexa – the voice service that powers Amazon Echo, provides capabilities, or skills, that enable users to interact with devices in a more intuitive way using voice.
  • AWS Lambda – The serverless function run by the Alexa skill.

The index.js file stored in CodeCommit is based on the alexa-skill-kit-sdk-factskill blueprint. As part of the deployment pipeline, the Node.js function gets packaged by CodeBuild and stored in S3. In the Deploy stage, it generates a CloudFormation template based on the Serverless Application Model and executes a change set on this template. The purpose of the generated template is to provision the Lambda function from the source in S3. Figure 3 illustrates how the Alexa skill interfaces with Lambda.


Figure 3 – Alexa Skills Kit and Lambda 
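The packaging step that CodeBuild performs in the Build stage is essentially a CloudFormation/SAM package operation. The YAML below is only a minimal buildspec sketch, not the project’s actual buildspec; the ARTIFACT_BUCKET variable and template file name are assumptions, while template-export.json matches the TemplatePath referenced in the pipeline snippet later in this post.

  version: 0.2
  phases:
    install:
      commands:
        - npm install
    build:
      commands:
        # Package the function source and emit a SAM template pointing at the uploaded artifact
        - aws cloudformation package --template-file template.yml --s3-bucket "$ARTIFACT_BUCKET" --output-template-file template-export.json
  artifacts:
    files:
      - template-export.json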

CloudFormation Template

In this section, I’ll highlight a few code snippets from the CloudFormation template that automates the provisioning of the AWS Developer Tools stack along with other resources including S3, IAM, and SNS.

IAM Role

There are several IAM roles provisioned in the CloudFormation template. The code shown in this section is for the IAM role that the Lambda function behind the Alexa skill assumes at runtime; the Serverless Application Model template references this role (via Fn::ImportValue) when defining the function.

  LambdaTrustRole:
    Type: AWS::IAM::Role
    Description: Creating service role in IAM for AWS Lambda
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action: sts:AssumeRole
          Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Path: "/"
      Policies:
      - PolicyDocument:
          Statement:
          - Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
            Effect: Allow
            Resource: "*"
          Version: '2012-10-17'
        PolicyName: MyLambdaWorkerPolicy
      RoleName: !Ref AWS::StackName

CodePipeline

The CodePipeline snippet shown below defines the three stages and four actions that orchestrate the deployment of the Lambda function used by the Alexa skill. The Source stage contains a single CodeCommit source action named Source; the repository it pulls from is provisioned as part of the same CloudFormation template. In the GenerateChangeSet deploy action, the TemplatePath: alexa-BuildArtifact::template-export.json property points to the SAM file that the PackageExport build action packaged and stored (a sketch of the buildspec behind that build action appears after the snippet). The SAM transform turns this file into a full CloudFormation template, and the resulting change set is executed by the ExecuteChangeSet action.

  CodePipelineStack:
    Type: AWS::CodePipeline::Pipeline
    DependsOn:
    - CodeBuildWebsite
    - LambdaTrustRole
    Properties:
      RoleArn:
        Fn::Join:
        - ''
        - - 'arn:aws:iam::'
          - Ref: AWS::AccountId
          - ":role/"
          - Ref: CodePipelineRole
      Stages:
      - Name: Source
        Actions:
        - InputArtifacts: []
          Name: Source
          ActionTypeId:
            Category: Source
            Owner: AWS
            Version: '1'
            Provider: CodeCommit
          OutputArtifacts:
          - Name: MyApp
          Configuration:
            BranchName:
              Ref: RepositoryBranch
            RepositoryName:
              Ref: AWS::StackName
          RunOrder: 1
      - Name: Build
        Actions:
        - InputArtifacts:
          - Name: MyApp
          Name: PackageExport
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          OutputArtifacts:
          - Name: alexa-BuildArtifact
          Configuration:
            ProjectName:
              Ref: CodeBuildWebsite
          RunOrder: 1
      - Name: Deploy
        Actions:
        - InputArtifacts:
          - Name: alexa-BuildArtifact
          Name: GenerateChangeSet
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Version: '1'
            Provider: CloudFormation
          OutputArtifacts: []
          Configuration:
            ActionMode: CHANGE_SET_REPLACE
            ChangeSetName: pipeline-changeset
            RoleArn:
              Fn::GetAtt:
              - CloudFormationTrustRole
              - Arn
            Capabilities: CAPABILITY_IAM
            StackName:
              Fn::Join:
              - ''
              - - ""
                - Ref: AWS::StackName
                - "-"
                - Ref: AWS::Region
                - ""
            TemplatePath: alexa-BuildArtifact::template-export.json
          RunOrder: 1
        - ActionTypeId:
            Category: Deploy
            Owner: AWS
            Provider: CloudFormation
            Version: '1'
          Configuration:
            ActionMode: CHANGE_SET_EXECUTE
            ChangeSetName: pipeline-changeset
            StackName:
              Fn::Join:
              - ''
              - - ""
                - Ref: AWS::StackName
                - "-"
                - Ref: AWS::Region
                - ""
          InputArtifacts: []
          Name: ExecuteChangeSet
          OutputArtifacts: []
          RunOrder: 2
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
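
For reference, the PackageExport action runs whatever commands are defined in the CodeBuild project’s buildspec. A minimal sketch of what that buildspec might look like is shown below; the exact file in the repository may differ, and the S3_BUCKET environment variable is assumed to be configured on the CodeBuild project.

version: 0.2
phases:
  install:
    commands:
    - npm install                      # install the skill's Node.js dependencies
  build:
    commands:
    # Upload the packaged function to S3 and emit template-export.json,
    # which the GenerateChangeSet action consumes via TemplatePath
    - >-
      aws cloudformation package --template-file sam-template.yml
      --s3-bucket $S3_BUCKET --output-template-file template-export.json
artifacts:
  files:
  - template-export.json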

Serverless Application Model

With the AWS Serverless Application Model (SAM), you can simplify the process of packaging a serverless application and deploying it with CloudFormation. The sam-template.yml file below uses SAM to define the Alexa skill’s function. Through the generate and execute change set actions defined in the pipeline above, this file is transformed into a full CloudFormation template at deploy time. The Fn::ImportValue function pulls an exported value from the main CloudFormation template that provisions this solution (a sketch of that export follows the snippet).

AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::Serverless-2016-10-31

Resources:
  AlexaSkillFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      Role:
        Fn::ImportValue:
          !Join ['-', [!Ref 'AWS::StackName', 'LambdaTrustRole']]
      Events:
        AlexaSkillEvent:
          Type: AlexaSkill
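
The export that this Fn::ImportValue resolves comes from the main template. Note that the pipeline names the generated stack “<main-stack>-<region>”, so the export name in the main template has to evaluate to the same string that the SAM template’s !Join produces at deploy time. A sketch of one way that export might look (the output’s logical name is an assumption):

Outputs:
  LambdaTrustRole:
    Description: ARN of the IAM role assumed by the Lambda function
    Value: !GetAtt LambdaTrustRole.Arn
    Export:
      # Evaluates to "<main-stack>-<region>-LambdaTrustRole", matching the import above
      Name: !Join ['-', [!Ref 'AWS::StackName', !Ref 'AWS::Region', 'LambdaTrustRole']]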

Costs

Costs vary with how you use the AWS services and other tools involved, so here is a cost breakdown and some sample scenarios to give you an idea of what your monthly spend might look like. Note that this depends on your unique environment and deployment, and the AWS Cost Calculator can assist in establishing cost projections.

  • CloudFormation – No additional cost.
  • CodeBuild – CodeBuild charges per minute used. It comes with 100 minutes per month at no charge. For a simple execution of this demo, you can stay within the limits of the AWS Free Tier – please read about the Free Tier here. For more information, see AWS CodeBuild pricing.
  • CodeCommit – If used on a small project of less than six users, there’s no additional cost. See AWS CodeCommit Pricing for more information.
  • CodePipeline – Customers can create new pipelines without incurring any charges on that pipeline for the first thirty calendar days. After that period, the new pipelines will be charged at the existing rate of $1 per active pipeline per month. For more information, see AWS CodePipeline pricing.
  • Lambda – Considering you likely won’t have over 1M requests for this particular solution, there’s no cost. The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month. For more information, see AWS Lambda Pricing.
  • Alexa – There is no direct cost associated with using the Alexa service. If you’re using an Amazon Echo device, there is a one-time payment for the hardware; after that, the only recurring cost is running the Lambda function behind the skill once it exceeds the 1M free requests per month.
  • IAM – No additional cost.
  • SNS – Considering you likely won’t have over 1 million Amazon SNS requests for this particular solution, there’s no cost. For more information, see AWS SNS Pricing.

Deployment Steps

There are three main steps in launching this solution: preparing an AWS account, launching the stack, and testing the deployment. Each is described in more detail in this section. Please note that you are responsible for any charges incurred while creating and launching your solution.

Step 1. Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions. Part of the sign-up process involves receiving a phone call and entering a PIN using the phone keypad. Be sure you’ve signed up for the CloudFormation service.
  2. Use the region selector in the navigation bar of the console to choose the Northern Virginia (us-east-1) region.

Step 2. Launch the Stack

Click on the “Launch Stack” button below to launch the CloudFormation stack. Before you launch the stack, review the architecture, configuration, and other considerations discussed in this post. To download the template, click here.

Time to deploy: Approximately 5 minutes

The template includes default settings that you can customize by following the instructions in this post.

Step 3. Test the Deployment

To test the deployment, you will need to configure the Alexa skill using the Amazon Developer Portal. You can use the Amazon Alexa Developer portal, a tool called Echosim, or an actual Amazon Echo device to test your skill.

Upload Code Assets to CodeCommit

  1. Once the CloudFormation stack is complete, select the checkbox next to the stack and go to the Outputs tab
  2. Click on the PipelineUrl link to launch the CodePipeline pipeline. The Source action will be in a failed state.
  3. From the pipeline, click on the CodeCommit link and copy the command under “Clone your repository to your local computer and start working on code” to your clipboard.
  4. From a Terminal on a computer where you have configured a Git client, paste and run the clone command.
  5. Copy all the files from your locally-cloned Git repository (for https://github.com/stelligent/devops-essentials/tree/master/samples/serverless/alexa) to the CodeCommit repository you just cloned.
  6. From your Terminal, type
    git add .
  7. From your Terminal, type:
    git commit -am "add new files" && git push
  8. Go back to your pipeline in CodePipeline and see the changes successfully flow through the pipeline.

Configure and Test Alexa Skill

At this time, you can’t just click a “Launch Stack” button to deploy an Alexa skill. Separately, you need to configure the Alexa skill to define the intent schema, sample utterances and, most relevant, the Lambda function ARN that was deployed as part of the CodePipeline pipeline. To configure and test your Alexa skill, follow the steps defined below.

  1. Once your pipeline has successfully completed, go to https://developer.amazon.com/alexa and click the Sign In link
  2. Use your Amazon credentials to login to the Amazon Developer portal
  3. Select Alexa
  4. Under Alexa Skills Kit select Get Started
  5. Click Add a New Skill
  6. Enter a Name and Invocation Name and Choose Save
  7. Click Next
  8. In the Intent Schema text area, enter the contents from IntentSchema.json.
  9. In the Sample Utterances text area, enter the contents from SampleUtterances_en_US.txt.
  10. Click Next
  11. Choose the AWS Lambda ARN (Amazon Resource Name) radio button in the Service Endpoint Type section.
  12. Choose the North America checkbox
  13. Go to the Lambda console and choose the radio button next to the function that the CodePipeline pipeline generated. Then, choose the Actions button and select the Show ARN item and copy the contents that are displayed to your clipboard.
  14. Go back to the Amazon Developer Portal and paste your clipboard contents to the North America text box.
  15. Click Next
  16. In the Service Simulator section, enter “tell me a space fact” in the Enter Utterance text box and click Ask (the name of your skill). You should see a valid response in the Lambda Response text area. Go to SampleUtterances_en_US.txt for some other examples to simulate.

Alternatively, you can use the Echosim service or an actual Amazon Echo device to test your Alexa skill.

Deployment Pipeline

There are three stages and four actions that compose the pipeline that orchestrates the deployment of the Lambda function used by the Amazon Alexa service.

  • Source – The single Source action uses the CodeCommit source action type to pull all the code assets for the Alexa skill, infrastructure, and deployment pipeline
  • Build – The single PackageExport action uses the CodeBuild build action type to package and store the Lambda function and associated files
  • Deploy
    • GenerateChangeSet – Uses the CloudFormation deploy action type to generate a change set for the CloudFormation template that defines the Lambda function
    • ExecuteChangeSet – Uses the CloudFormation deploy action type to execute that change set, deploying the Lambda function

Figure 4 annotates the stages and actions of this deployment pipeline.

serverless-pipeline-annotated

Figure 4 – Annotated Deployment Pipeline for Solution

DevOps Essentials on AWS Complete Video Course

This and many more topics are covered in the DevOps Essentials on AWS Complete Video Course (release date: August 2017). In it, you’ll learn how to automate the infrastructure and deployment pipelines using AWS services and tools. If you’re a software or DevOps-focused engineer or architect interested in learning how to use AWS Developer Tools to create a full-lifecycle software delivery solution, it’s the course for you. The focus of the course is on deployment pipeline architectures and their implementations.

Additional Resources

You can also build voice-enabled applications using Amazon Lex, Amazon Polly, and other AWS services – only without the “wake word” functionality.

Here are some of the supporting resources discussed in this post:

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

DevOps on AWS Radio: mu – DevOps on AWS tool (Episode 10)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps on AWS news and speak with Casey Lee from Stelligent about the open-source, full-stack DevOps on AWS tool called mu.

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. What is mu and what problem does it solve? What are its benefits?
  2. How does someone use mu (including prereqs)?
  3. What types of programming languages and platforms are supported?
  4. What types of AWS architectures does mu support (e.g., traditional EC2, ECS, Serverless, etc.)?
  5. Which AWS services are provisioned by mu?
  6. Does mu support non-AWS implementations?
  7. What does mu install on my AWS account?
  8. Describe mu’s support for configuration/secrets
  9. Extensibility?
  10. Price?
  11. What’s next on the mu roadmap?
  12. How can listeners learn more about mu?

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Introduction to NixOS

NixOS, and declarative immutable systems in general, are a great fit for CI/CD pipelines. With the entire system in code, ensuring and auditing reproducible environments becomes easy. Applications can also be “nixified,” so both system and application are fully declarative and in version control. The NixOS system is mounted read-only, which makes it a good fit for immutable autoscaling groups. Instance userdata may contain a valid NixOS configuration, which is applied on boot, so any necessary post-bake changes can be implemented.

“NixOS. The Purely Functional Linux Distribution. NixOS is a Linux distribution with a unique approach to package and configuration management. Built on top of the Nix package manager, it is completely declarative, makes upgrading systems reliable, and has many other advantages.”
https://nixos.org/

There are four main parts to Nix and NixOS (Expressions, OS, Modules, and Tests). We will examine each one:

The Nix expression language

This is the “package manager” functionality of NixOS, which can be downloaded from here. Each open source project, including the Linux OS, has a “nix expression” that describes how it is to be built. This includes explicitly declaring each dependency. Each dependency is also declared with a nix expression, so the entire system is declarative. All Nix packages live in the Nixpkgs GitHub repository, which contains, for example, the expression for ElectricSheep, a distributed screen saver for evolving artificial organisms.

The underlying mechanism of dependency and package management is a system of symbolic links. Packages are built and deployed into an immutable “nix store”. This read-only location exists at /nix/store. In this location, the package name has a hash added, which is computed from all of its build input dependencies. Therefore, we can have the same package available many times, each with a unique hash and a unique version of its dependencies. Symbolic links from /run/current-system/sw/bin/ to the hashed package name in the nix store determine which package is called, as /run/current-system/sw/bin/ is in the user’s $PATH.

Should any change to a packaging expression happen, all packages depending on it are rebuilt. If a mistake is found, it is easy to revert the change to the dependency and rebuild. Ensuring the reproducibility of system packages is a huge win, and this solution to “dependency hell” works very well in practice. The Nix package manager can be run on any Linux distribution, such as Fedora or Ubuntu, and also works on Darwin/OSX. Custom code can also be packaged with nix expressions, so both the app and the OS are fully declarative and reproducible.

The NixOS Linux distribution

Nix packaging of open source software, including the Linux kernel and boot processes, makes up the NixOS Linux distribution. Official releases are available online. NixOS channels are the method for specifying which version of NixOS is to be installed. NixOS is controlled by the /etc/nixos/configuration.nix file, which declaratively defines the NixOS environment, including which NixOS channel is to be used on the system. Whenever configuration.nix is updated, a nixos-rebuild switch can be executed, which switches to the new configuration immediately and adds a new “generation” to the GRUB/EFI boot menu so that the new version can be booted.

As of NixOS 16.03, AWS EC2 instance metadata support is built in. However, instead of the usual cloud-init directives, the NixOS instance expects the userdata to be a valid configuration.nix. Upon boot, the system “switches” to what is defined in the configuration.nix provided via EC2 userdata. This allows for immutable, declarative instances in AWS Auto Scaling groups.

NixOS Configuration Management Modules

NixOS modules define how services, via systemd, are to be configured and run. Modules are written so that parameters can be set that correspond to how the systemd service is to be run.

The Buildbot NixOS module, for example, writes out the Buildbot configuration based on its module parameters and then ensures the service is running.

NixOS Tests

NixOS tests are a mechanism to ensure NixOS expressions and modules work as expected. Virtual machines are spun up to perform the declared tests. Currently, VMs spun up for testing use the QEMU hypervisor, although NixOS tests are moving to libvirt, ideally supporting autodetection of available system virtualization technologies. The test for the Buildbot continuous integration server is one example.

After downloading and building all dependencies, the test performs a build that starts a QEMU/KVM virtual machine containing the Nix system. It is also possible to bring up this test system interactively to facilitate debugging. The virtual machine mounts the Nix store of the host, which makes VM creation very fast, as no disk image needs to be created. These tests can then be implemented in a continuous integration environment such as Buildbot or Hydra.

In addition to declaratively expressing and testing system packages, applications can be nixified in the same way. Application tests can then be written in the same manner, so both system and application can go through a continuous integration pipeline. In a Docker microservices environment, where applications are defined as immutable containers, NixOS is the perfect host node OS, running the Docker daemon and an orchestrator such as Nomad or Kubernetes.

NixOS is under fast, active development. Many users are also NixOS contributors, so most nix packaging of open source projects stays up-to-date. Unstable and release channels are available. Installation is very well documented online. The ability to easily “switch” between configuration versions, or “generations,” each of which gets its own GRUB/EFI boot entry, makes for a great workstation distro. The declarative reproducibility of a long-term stable release, with cloud-init userdata support, makes for a great server distribution.

Thanks for reading,
@hackoflamb

DevOps in AWS Radio: Goss (Episode 9)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps in AWS news and speak with Ahmed Elsabbahy about Goss, a ServerSpec alternative for testing server configuration.

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. What is Goss?
  2. Why was Goss created?
  3. Why would you use Goss over serverspec or other server configuration testing tools?
  4. Where does Goss fit into a continuous delivery pipeline?
  5. How does Goss work with AWS?
  6. How does Goss work with production testing?
  7. Where can we find out more information about Goss?

Additional Resources

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Devops Benefits of Infrastructure as Code

Infrastructure and operations as code is an essential practice for realizing the advantages of modern clouds.  For enterprises looking to migrate to Amazon Web Services, Azure, or Google Cloud Platform, scripted infrastructure and automation are the key first steps through which other devops practices become accessible.  This post will enumerate some key benefits that become possible once we embrace infrastructure as code practices.

By codifying our infrastructure, we enable better testing and quality control, improve monitoring, lower the cost of experimentation and innovation, make deployments more efficient and predictable, and decrease recovery times, reducing the mean time to resolution (MTTR) for issues.

Automate your deployment and recovery processes

With infrastructure automation, reproducible environments become possible.  We can use the same automation scripts to deploy exact copies of production to development, test, and production environments.  With these consistent deployments, we are able to achieve the ever-elusive development-to-prod parity, finally putting an end to the “it worked on my machine!” problems.
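
One common way to keep those copies identical except for a name is a single template parameterized by environment, so the same code deploys every copy. The sketch below is purely illustrative; the parameter and resource names are made up.

Parameters:
  Environment:
    Type: String
    AllowedValues:
    - dev
    - test
    - prod
    Description: Which copy of the stack to create

Resources:
  AppBucket:                           # hypothetical resource shared by every environment
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub myapp-${Environment}-artifacts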

The pinnacle of infrastructure automation is the Blue/Green deployment strategy.  This strategy enables zero downtime deployments and allows us to run live tests before releasing our changes to our users.  Blue/Green Deployments take advantage of our ability to run exact copies of our environments in parallel.  By controlling when traffic is routed to our new copy, we can defer a release until we are 100% confident that our new environment is ready.

In a Blue/Green deployment, we deploy a new, isolated copy of our environment.  This new, copied environment is named Green.  It is our release candidate.  It contains our new changes and is isolated from the live environment, which we call Blue.  The Green environment is configured for production and is ready to go live, but it is launched darkly – that is, no traffic is routed to Green.  

Next, we run our acceptance tests against the live Green environment.  If we encounter an error, we can simply log the error, remove the Green environment and go back to the drawing board.  No users ever know a difference, as we never routed any live traffic to Green.  

If our acceptance tests do pass, we promote our Green environment to be the new live environment.  This can be done by changing a DNS entry to point at the Green environment or by removing the Blue environment from our load balancer and adding the Green environment to the load balancer.  
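
As a concrete illustration of the DNS-based swap, the cutover might be expressed as weighted Route 53 records, shifting the weight from Blue to Green. This is only a sketch; the record names and values below are placeholders, not part of any particular deployment.

  BlueRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: CNAME
      TTL: '60'
      SetIdentifier: blue
      Weight: 0                        # drain the old (Blue) environment
      ResourceRecords:
      - blue.example.com
  GreenRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: CNAME
      TTL: '60'
      SetIdentifier: green
      Weight: 100                      # send all live traffic to the new (Green) environment
      ResourceRecords:
      - green.example.com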

The Blue environment does not need to be automatically deleted.  If necessary, we can keep it around for a short grace period in case we need to rollback.  The rollback process would consist of reversing the traffic swap to point back at Blue.

This is merely an overview of the Blue/Green deployment strategy.  For an in-depth discussion of Blue/Green techniques in an AWS environment, see the AWS whitepaper on the topic.

Rollback with the same tested processes

Our deployment scripts are also our rollback scripts.  Because our deployments are automated, we can reproduce the state of the infrastructure any number of times by simply re-running the deployment scripts with the same inputs.  With our codified infrastructure, we can reach back in version control to grab any commit since the repository began.  By reverting to the desired commit and re-running our deployment scripts, we can restore the state of the infrastructure as it was on any given day.

Don’t Repair, Redeploy

Server time is cheap, but engineer time is expensive.  Further, troubleshooting server performance issues can be very time-consuming.  For these reasons, it no longer makes sense to troubleshoot and repair our servers.  Rather it is now more economical to destroy the old server instance and replace it with a new, working copy.

We can use our automated deployment scripts to deliver working servers to replace broken and impaired servers.  We can now follow an immutable infrastructure pattern, in which nothing ever changes on a server after it is deployed.  This helps avoid the problem of configuration drift and also greatly simplifies our operations.  Now, the only repair operation is to redeploy the service.  A service crashed?  Redeploy.  Having performance issues on a host?  Redeploy.  Lost connectivity to a host?  Redeploy.

Focus on Mean Time To Recovery

They say you can’t fix what you don’t measure, but it’s important to choose the right metrics to measure and improve upon. To traditional IT organizations, the key metric is Mean Time Between Failures (MTBF).   Server uptime is paramount, and this is the metric that gets optimized.  This leads to a reluctance to accept changes, as each change can potentially introduce a failure.  Moreover, configuration changes are generally made manually by administrators.  This leads to long-running  snowflake servers which are virtually impossible to reproduce.  This presents a very nasty challenge in restoring service availability when the inevitable failures do occur.  

Failure of an IT component means the organization is losing money.  But failures do and will happen.  In a cloud-native world, we solve this problem by turning it on its head.  Rather than trying to avoid failures, devops organizations accept that failures are a part of life and design our applications to minimize the impact of those failures by recovering gracefully.  To accomplish this, we focus on Mean Time To Recovery (MTTR) as our key metric.  By minimizing the time it takes to recover from failure, we minimize the impact of each failure.  Optimizing for MTTR necessitates automation of our processes.  Our recovery processes must be consistent and reliable.

Practice makes perfect

If we want to improve at anything, we have to practice.  Recovering from failures is no different.  We do not want the first test of our recovery processes to be during an actual disaster.  Rather, we want to test our recovery process numerous times before we actually need it.  Doing so gives us confidence that our recovery process will work as intended and restore the availability of our service.

Traditionally, creating an isolated environment for disaster recovery was too cost-prohibitive and time-consuming to be a feasible strategy.  The only way to test our process was to actually have a disaster.  However, with modern cloud environments, we no longer have this limitation.  Creating a new environment is an API call away.  Once we’ve codified our infrastructure, we can create a copy of our production environment by running the same code we used to create production.

We create our new copy environment to be totally isolated from our production environment.  We are now free to simulate disasters and test our recovery processes.  This can be done regularly in a low stress environment, allowing our engineering teams to troubleshoot and strategize without the added pressure of an actual outage.

Each time the process fails, we learn a little bit more.  We can then use this information to correct the problem and improve our automated recovery scripts.  At the very least, we document the known issues and add the solutions to common problems in our standard procedures.  

We should practice these failures regularly.  By the time an actual disaster occurs, we should have multiple practice runs of recovering from the disaster, as well as hundreds or even thousands of trial runs from the deployments being run with the same scripts.

Use testing tools to verify your infrastructure

With our infrastructure codified and our restore process automated, the next step is to design a set of automated tests that verify the environment is working as intended.  Because we now think of our infrastructure as a software application, we should use software testing tools to test our infrastructure.  By using tools like Python’s Behave or Ruby’s RSpec, we can test that our service is behaving as expected.

These tests don’t have to be complicated, and can start out very simply.  The first test can just be “Is the service up and reachable?”  After all, this is the entire goal of the software project – if it is not up and working, it is of no use.  Then we can start to further refine our tests to include those behaviors we expect a healthy service to exhibit.  A good starting point is to hit each of our service’s endpoints in an automated fashion.  These basic tests give us a high level of certainty that the app is behaving as expected, and we can add more detailed testing to test for specific failure cases.

As we practice our failures and recovery process, we will find new issues that can cause our system to not operate correctly.  As these issues are discovered, we test for them and add those tests to our suite of automated tests.  These tests also double as regression tests.  When a new feature is added and a test breaks, we know exactly which change caused the service tests to fail.  As time goes by, we build a more comprehensive test suite and incrementally increase our confidence in our recovery process.  

Hook your tests into your monitoring system

Our automated test suite gives us confidence that our service is behaving correctly during deployment and recovery.  In these situations, the conditions are known and assumptions can hide.  But what happens when our service is used in unexpected ways, as is bound to happen when real users start using the application?  We can hook these tests into our monitoring systems and run them on a periodic basis.  In this way, we can be alerted the moment something goes wrong.  Running our tests in this fashion allows us to test against real-world scenarios.  This is our first line of defense in detecting real-world errors.
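
As a rough sketch of one way to wire this up on AWS, the scheduled test run could publish a custom CloudWatch metric, with an alarm on any failures. The namespace, metric name, and AlertTopic resource below are assumptions for illustration, not part of an existing stack.

  ServiceTestFailureAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Periodic service tests are reporting failures
      Namespace: MyService/Tests       # assumed custom namespace published by the test job
      MetricName: FailedChecks         # assumed custom metric name
      Statistic: Sum
      Period: 300                      # evaluate in five-minute windows
      EvaluationPeriods: 1
      Threshold: 0
      ComparisonOperator: GreaterThanThreshold
      TreatMissingData: breaching      # no data means the tests did not run, so treat it as a failure
      AlarmActions:
      - !Ref AlertTopic                # assumed SNS topic that notifies the team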

Conclusion

You don’t have to be a Netflix or an Airbnb to take advantage of devops practices. Fortune 500 companies and government agencies are adopting these patterns so that they can recover from failure more quickly, deploy more often, and deploy more quickly.  The prerequisite to practicing these modern devops techniques is Infrastructure as Code.  If your organization is looking to begin capitalizing on the benefits of modern clouds but does not know where to start, codifying infrastructure should be the first step.

Are you looking for guidance transitioning your legacy apps to AWS?  Stelligent can help!  Stelligent Migrate is our service in which we help facilitate the migration of your enterprise workloads to AWS.  If you have any questions or are interested in how Stelligent can help, please reach out to sales@stelligent.com!