In the second post of this series, we focused on how to get our serverless application running with Lambda, API Gateway and S3. Our application is now able to run on a serverless platform, but we still have not applied the fundamentals of continuous delivery that we talked about in the first part of this series.
In this third and final part of this series on serverless delivery, we will implement a continuous delivery pipeline using serverless technology. Our pipeline will be orchestrated by CodePipeline with actions implemented in Lambda functions. The definition of the CodePipeline resource as well as the Lambda functions that support it are all defined in the same CloudFormation stack that we looked at last week.

Visualize The Goal

To help visualize what we are building, here is a picture of what the final pipeline looks like.
[Image: CodePipeline pipeline]
If you’re new to CodePipeline, let’s go over a few important terms:

  • Job – An execution of the pipeline.
  • Stage – A group of actions in the pipeline. Stages never run in parallel with each other, and only one job can be actively running in a stage at a time. If a stage is currently running and a new job arrives at the stage, it will block until the prior job completes. If multiple new jobs arrive, only the newest will run and the rest will be dropped.
  • Action – A step to be performed. Actions can run in parallel or in series with each other. In this pipeline, all of our actions will be implemented by Lambda invocations.
  • Artifact – Each action can declare input and output artifacts that will be stored in an S3 bucket. These are objects that it will either expect to have before it runs, or objects that it will produce and make available after it runs.

The pipeline we have built for our application consists of the following four stages:

  • Source – The source stage has only one action to acquire the source code for later stages.
  • Commit – The commit stage has two actions that are responsible for:
    • Resolving project dependencies
    • Processing (e.g., compile, minify, uglify) the source code
    • Static analysis of the code
    • Unit testing of the application
    • Packaging the application for subsequent deployment
  • Acceptance – The acceptance stage has actions that will:
    • Update the Lambda function from latest source
    • Update S3 bucket with latest artifacts
    • Update API Gateway
    • End-to-end testing of the application
  • Production – The production stage performs the same steps as the Acceptance stage but against the production Lambda, S3 and API Gateway resources
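
In the CloudFormation template, these stages are simply entries in the pipeline resource’s Stages list. Here is a trimmed-down sketch of that shape; the actions inside each stage are elided here and covered stage by stage below:

"Stages": [
  { "Name": "Source",     "Actions": [] },
  { "Name": "Commit",     "Actions": [] },
  { "Name": "Acceptance", "Actions": [] },
  { "Name": "Production", "Actions": [] }
]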

Here is a more detailed picture of the pipeline. We will spend the rest of this post breaking down each step of the pipeline.
[Image: pipeline-overview diagram with numbered steps]

Start with Source Stage

Diagram Step: 1
The source stage only has one action in it, a 3rd party action provided by GitHub. The action will register a hook with the repo that you provide to kick off a new job for the pipeline whenever code is pushed to the GitHub repository. Additionally, the action will pull the latest code from the branch you specified and zip it up into an object in an S3 bucket for later actions to reference.

{
  "Name": "Source",
  "Actions": [
    {
      "InputArtifacts": [],
      "Name": "Source",
      "ActionTypeId": {
        "Category": "Source",
        "Owner": "ThirdParty",
        "Version": "1",
        "Provider": "GitHub"
      },
      "Configuration": {
        "Owner": "stelligent",
        "Repo": "dromedary",
        "Branch": "serverless",
        "OAuthToken": "XXXXXX",
      },
      "OutputArtifacts": [
        {
          "Name": "SourceOutput"
        }
      ],
      "RunOrder": 1
    }
  ]
}

 
This approach helps solve a common challenge with source code management for Lambda. Obviously no one wants to upload code through the console, so many end up using CloudFormation to manage their Lambda functions. The challenge is that the CloudFormation Lambda resource expects your code to be zipped in an S3 bucket. This means you either need to use S3 as the “source of truth” for your source code, or have a process to keep it in sync from the real “source of truth”. By building a pipeline, you can keep your source in GitHub and use the actions that we are about to go through to deploy the Lambda function.
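
For reference, here is a minimal sketch of the relevant part of an AWS::Lambda::Function resource; the resource name, role, handler, runtime, bucket and key below are placeholders, not the actual values from the stack:

"AppLambdaFunction": {
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "Handler": "index.handler",
    "Runtime": "nodejs4.3",
    "Role": { "Fn::GetAtt": [ "LambdaExecutionRole", "Arn" ] },
    "Code": {
      "S3Bucket": "my-artifact-bucket",
      "S3Key": "dist/lambda.zip"
    }
  }
}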

Build from Commit Stage

Diagram Steps: 2,3,4
The commit stage of the pipeline consists of two actions that are implemented with Lambda invocations. The first action is responsible for resolving the application dependencies via NPM. This can be an expensive operation taking many minutes, and is needed by many downstream actions, so the dependencies are zipped up and become an output artifact of this first action. Here are the details of the action:

  • Download & Unzip – Get the source artifact from S3 and unzip into a temp directory
  • Run NPM – Run npm install in the extracted source folder
  • Zip & Upload – Zip up the source folder with its dependencies in node_modules and upload the artifact to S3

Downloading the input artifact is accomplished with the following code:

// Find the named input artifact among the job's input artifacts.
var artifact = null;
jobDetails.data.inputArtifacts.forEach(function (a) {
  if (a.name === artifactName && a.location.type === 'S3') {
    artifact = a;
  }
});
if (artifact != null) {
  // Download the artifact from its S3 location into the destination directory.
  var params = {
    Bucket: artifact.location.s3Location.bucketName,
    Key: artifact.location.s3Location.objectKey
  };
  return getS3Object(params, destDirectory);
} else {
  return Promise.reject("Unknown Source Type:" + JSON.stringify(jobDetails.data.inputArtifacts));
}
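
The getS3Object helper isn’t shown in the excerpt. Here is a minimal sketch of what it might look like, assuming the AWS SDK for Node.js and the adm-zip package; the real implementation in index.js may differ, and would typically build the S3 client from the temporary artifactCredentials included in the job data:

var AWS = require('aws-sdk');
var AdmZip = require('adm-zip');
var fs = require('fs');
var path = require('path');

var s3 = new AWS.S3();

// Download the artifact zip from S3 and extract it into destDirectory.
function getS3Object(params, destDirectory) {
  return s3.getObject(params).promise()
    .then(function (data) {
      var zipPath = path.join(destDirectory, 'artifact.zip');
      fs.writeFileSync(zipPath, data.Body);
      new AdmZip(zipPath).extractAllTo(destDirectory, true);
    });
}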

Likewise, the output artifact is uploaded with the following:

// Find the named output artifact among the job's output artifacts.
var artifact = null;
jobDetails.data.outputArtifacts.forEach(function (a) {
  if (a.name === artifactName && a.location.type === 'S3') {
    artifact = a;
  }
});
if (artifact != null) {
  // Upload the zip file to the artifact's S3 location.
  var params = {
    Bucket: artifact.location.s3Location.bucketName,
    Key: artifact.location.s3Location.objectKey
  };
  return putS3Object(params, zipfile);
} else {
  return Promise.reject("Unknown Source Type:" + JSON.stringify(jobDetails.data.outputArtifacts));
}
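
Similarly, a minimal sketch of a putS3Object helper, again assuming the AWS SDK for Node.js (the real helper may differ):

// Upload a local zip file to the S3 location that CodePipeline expects for the output artifact.
function putS3Object(params, zipfile) {
  params.Body = fs.readFileSync(zipfile);
  return s3.putObject(params).promise();
}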

 
Diagram Steps: 5,6,7
The second action in the commit stage is responsible for acquiring the source and dependencies, processing the source code, performing static analysis, running unit tests and packaging the output artifacts. This is accomplished by a Lambda action that invokes a Gulp task on the project. This allows the details of these steps to be defined in Gulp alongside the source code and to change at a different pace than the pipeline. Here is the CloudFormation for this action:

{
  "InputArtifacts":[
    {
      "Name": "SourceInstalledOutput"
    }
  ],
  "Name":"TestAndPackage",
  "ActionTypeId":{
    "Category":"Invoke",
    "Owner":"AWS",
    "Version":"1",
    "Provider":"Lambda"
  },
  "Configuration":{
    "FunctionName":{
      "Ref":"CodePipelineGulpLambda"
    },
    "UserParameters": "task=package&DistSiteOutput=dist/site.zip&DistLambdaOutput=dist/lambda.zip”
  },
  "OutputArtifacts": [
    {
      "Name": "DistSiteOutput"
    },
    {
      "Name": "DistLambdaOutput"
    }
  ],
  "RunOrder":2
}

Notice the UserParameters setting defined in the resource above. CodePipeline treats it as an opaque string that is passed into the Lambda function. I chose to use a query string format to pass multiple values into the Lambda function. The task parameter defines which Gulp task to run, and the DistSiteOutput and DistLambdaOutput parameters tell the Lambda function where to expect to find the artifacts that it then uploads to S3.
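
Inside the Lambda function, that string can be unpacked with Node’s querystring module. A minimal sketch (the exact parsing in index.js may differ):

var querystring = require('querystring');

// UserParameters is an opaque string, e.g. "task=package&DistSiteOutput=dist/site.zip&DistLambdaOutput=dist/lambda.zip"
var config = event["CodePipeline.job"].data.actionConfiguration.configuration;
var userParams = querystring.parse(config.UserParameters || '');

var gulpTask = userParams.task;                       // which Gulp task to run, e.g. "package"
var siteArtifactPath = userParams.DistSiteOutput;     // where the site artifact will be written, e.g. "dist/site.zip"
var lambdaArtifactPath = userParams.DistLambdaOutput; // where the Lambda artifact will be written, e.g. "dist/lambda.zip"
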
For more details on how to implement CodePipeline actions in Lambda, check out the entire source of these functions at index.js or read the post Mocking CodePipeline with Lambda.

Test in Acceptance Stage

Diagram Steps: 8,9,10,11
The Acceptance stage is responsible for acquiring the packaged application artifacts, deploying the application to a test environment, and then running a Gulp task to execute the end-to-end tests against that environment. Let’s look at the details of each of these four actions in this stage:

  • Deploy App – The Lambda function for the application is updated with the code from the Commit stage as a new version. Additionally, the test alias is moved to this new version (a sketch of this step appears after this list). As you may recall from part 2, this alias is used by the test stage of the API Gateway to determine which version of the Lambda function to invoke.

[Image: Lambda version aliases]

  • Deploy API – At this point, this is a no-op. My goal is to have this action use a Swagger file in the source code to update the API Gateway resources, methods, and integrations. This would allow API changes to take effect in API Gateway on each build, whereas the current solution requires updating the CloudFormation stack outside the pipeline to change the API Gateway.
  • Deploy Site – This action publishes all static content (HTML, CSS, JavaScript and images) to a test S3 bucket. Additionally, it publishes a config.json file to the bucket that the application uses to determine the endpoint for the APIs. Here’s a sample of the file that is created:
{
  "apiBaseurl":"https://rue1bmchye.execute-api.us-west-2.amazonaws.com/test/",
  "version":"20160324-231829"
}
  • End-to-end Testing – This action invokes a Gulp task to run the functional tests. Additionally, it sets an environment variable with the endpoint URL of the application for the Gulp process to test against.
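
Here is a minimal sketch of what the Deploy App step might look like with the AWS SDK for Node.js: publish a new version from the freshly uploaded code, then move the test alias to it. The function name, bucket and key parameters are hypothetical, and the real action in index.js may differ:

var AWS = require('aws-sdk');
var lambda = new AWS.Lambda();

// Publish the new code as an immutable version, then point the "test" alias at it.
function deployAppToTest(functionName, bucket, key) {
  return lambda.updateFunctionCode({
    FunctionName: functionName,
    S3Bucket: bucket,
    S3Key: key,
    Publish: true                  // create a new version from this code
  }).promise()
  .then(function (fn) {
    return lambda.updateAlias({
      FunctionName: functionName,
      Name: 'test',                // the alias the API Gateway test stage invokes
      FunctionVersion: fn.Version
    }).promise();
  });
}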

Sidebar: Continuation Token

One challenge of using Lambda for actions is the current 300-second function execution timeout. If you have an action that will take longer than 300 seconds (e.g., launching a CloudFormation stack), you can utilize the continuation token. A continuation token is an opaque value that you can return to CodePipeline to indicate that your action is not yet complete. CodePipeline will then reinvoke your action, passing in the continuation token you provided in the prior invocation.
The following code uses the UserParameters setting as a maximum number of attempts and uses continuationToken as a count of prior attempts. If the action needs more time, it compares maxAttempts with priorAttempts and, if there are still attempts remaining, calls into CodePipeline to signal success, passing a continuation token to indicate that the action needs to be reinvoked.

var AWS = require('aws-sdk');
var codepipeline = new AWS.CodePipeline();

var jobData = event["CodePipeline.job"].data;
// UserParameters holds the maximum number of attempts; the continuation token holds how many have already happened.
var maxAttempts = parseInt(jobData.actionConfiguration.configuration.UserParameters) || 0;
var priorAttempts = parseInt(jobData.continuationToken) || 0;
if (priorAttempts < maxAttempts) {
    console.log("Retrying later.");
    var params = {
        jobId: event["CodePipeline.job"].id,
        continuationToken: (priorAttempts + 1).toString()
    };
    // Signal success with a continuation token so CodePipeline reinvokes this function.
    return codepipeline.putJobSuccessResult(params).promise();
}
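
Once the long-running work actually finishes (or the attempts are exhausted), the action reports a final result instead of another continuation token. A minimal sketch, where workSucceeded is a hypothetical flag for whatever the action was waiting on:

if (workSucceeded) {
  // Final success: no continuation token means the action is done.
  return codepipeline.putJobSuccessResult({
    jobId: event["CodePipeline.job"].id
  }).promise();
} else {
  // Final failure: fail the job so the pipeline stops here.
  return codepipeline.putJobFailureResult({
    jobId: event["CodePipeline.job"].id,
    failureDetails: {
      type: 'JobFailed',
      message: 'Gave up after ' + maxAttempts + ' attempts'
    }
  }).promise();
}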

Deploy from Production Stage

The Production stage uses the same action definitions from the Acceptance stage to deploy and test the application. The only difference is that it passes in the production S3 bucket name and Lambda ARN to deploy to.
I spent time considering how to do a Blue/Green deployment with this environment. Blue/Green deployment is an approach to reduce deployment risk by launching a duplicate environment for code changes (green environment) and then cutting over traffic from the existing (blue environment) to the new environment. This also affords a safe and quick rollback by switching traffic back to the old (blue) environment.
I looked into doing a DNS-based Blue/Green deployment using Route53 resource records. This would be accomplished by creating a new API Gateway and Lambda function for each job, and then using weighted routing to move traffic over from the old API Gateway to the new one.
I’m not convinced this level of complexity would provide much value, however, because given the way Lambda manages versions and API Gateway manages deployments, you can roll changes back very quickly by moving the Lambda version alias. One limitation, though, is that you cannot do a canary deployment with a single API Gateway and Lambda version aliases. I’m curious what your thoughts are on this; ping me on Twitter @Stelligent with #ServerlessDelivery.

Sidebar: Gulp + CloudFormation

You’ll also notice that there is a gulpfile.js in the dromedary-serverless repo to make it easier to launch and manage the CloudFormation stack. Specifically, you can run gulp pipeline:up to launch the stack, gulp pipeline:wait to wait for the pipeline creation to complete and gulp pipeline:status to see the status of the stack and its outputs. This code has been factored out into its own repo named serverless-pipeline if you’d like to add this type of integration between Gulp and CloudFormation in your own project.
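
If you are curious what such a task looks like, here is a rough sketch of a pipeline:up task using the AWS SDK for Node.js; the stack name, template path and parameters are illustrative, and the actual implementation in serverless-pipeline may differ:

var gulp = require('gulp');
var fs = require('fs');
var AWS = require('aws-sdk');

var cloudformation = new AWS.CloudFormation();

// Launch the pipeline stack from a local CloudFormation template.
gulp.task('pipeline:up', function (callback) {
  cloudformation.createStack({
    StackName: 'serverless-pipeline',
    TemplateBody: fs.readFileSync('pipeline.json', 'utf8'),
    Capabilities: ['CAPABILITY_IAM'],
    Parameters: [
      { ParameterKey: 'GitHubUser', ParameterValue: 'your-github-user' },
      { ParameterKey: 'GitHubToken', ParameterValue: process.env.GITHUB_TOKEN }
    ]
  }, callback);
});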

Try it out!

Want to experiment with this stack in your own account? The CloudFormation templates are available for you to run with the link below. First, you’ll want to fork the dromedary repository into your GitHub account. Then you’ll need to provide the following parameters to the stack:

  • Hosted Zone – you’ll need to set up a Route53 hosted zone in your account before launching the stack for the Route53 resource records to be created in.
  • Test DNS Name – a fully qualified hostname (within the Hosted Zone you created) for the test resources (e.g., test.example.com).
  • Production DNS Name – a fully qualified hostname (within the Hosted Zone you created) for the production resources (e.g., prod.example.com).
  • OAuth2 Token – your OAuth2 token (see here for details)
  • User Name – your GitHub username

[Image: stack parameters]

Conclusion

In this series, we have addressed how to achieve the fundamentals of continuous delivery in a serverless environment. Let’s review those fundamentals and how we addressed them:

  • Continuous – We used CodePipeline to run a series of actions against every commit to our GitHub repository.
  • Quality – We built static analysis, unit tests and end-to-end tests into our pipeline and ran them for every commit.
  • Automated – The provisioning of the pipeline and the application was done from a single CloudFormation stack.
  • Reproducible – Other than the creation of a Route53 hosted zone, there were no prerequisites to running this CloudFormation stack.
  • Serverless – All the tools chosen were AWS managed services, including Lambda, API Gateway, S3, Route53 and CodePipeline. No servers were harmed in the making of this series.

Please follow us on Twitter to be informed of future articles on Serverless Delivery and other exciting topics.  Also, keep your eye out for a new book set to be released later this year by Stelligent CTO, Paul Duvall, on Continuous Delivery in AWS – which will contain a chapter on serverless delivery.
 
