If your application tech stack doesn’t need servers, why should your continuous delivery pipeline? Serverless applications deserve serverless delivery!
The software development discipline of continuous delivery has had a tremendous impact on decreasing the cost and risk of delivering changes while simultaneously increasing code quality by ensuring that software systems are always in a releasable state. However, the tools and techniques that exist for this practice do not always align well with serverless application frameworks and platforms. This post is the first in a three-part series that looks at how to implement the same fundamental tenets of continuous delivery while utilizing tools and techniques that complement the serverless architecture in Amazon Web Services (AWS).
Here are the requirements of the serverless delivery pipeline:

  • Continuous – The pipeline must be capable of taking any commit on master that passes all test cases to production
  • Quality – The pipeline must include unit testing, static code analysis and functional testing of the application
  • Automated – The provisioning of the pipeline and the application must be done from a single CloudFormation command (see the sketch after this list)
  • Reproducible – The CloudFormation template should be able to run on a new AWS account with no additional setup other than the creation of a Route 53 Hosted Zone
  • Serverless – All layers of the application must run on platforms that meet the definition of serverless as described in the next section
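
To illustrate the "Automated" requirement, here is a minimal sketch of kicking off that provisioning step from Node.js with the AWS SDK. The stack name, template URL, and parameter name are hypothetical placeholders rather than the actual values used in this series.

```javascript
// Minimal sketch: provision the pipeline and application from a single CloudFormation call.
// The stack name, template URL, and parameter key below are hypothetical placeholders.
var AWS = require('aws-sdk');
var cloudformation = new AWS.CloudFormation({ region: 'us-east-1' });

cloudformation.createStack({
  StackName: 'serverless-delivery-demo',
  TemplateURL: 'https://s3.amazonaws.com/example-bucket/pipeline-template.json',
  Capabilities: ['CAPABILITY_IAM'],   // the template creates IAM roles for Lambda and CodePipeline
  Parameters: [
    // Assumes a pre-existing Route 53 Hosted Zone, per the "Reproducible" requirement
    { ParameterKey: 'HostedZoneName', ParameterValue: 'example.com.' }
  ]
}, function (err, data) {
  if (err) { return console.error('Stack creation failed:', err); }
  console.log('Stack creation started:', data.StackId);
});
```

The equivalent AWS CLI one-liner works just as well; the point is that a single command stands up both the pipeline and the application.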

What is Serverless?

What exactly is a serverless platform? Obviously, there is still hardware at some layer in the stack to host the application. However, what Amazon has provided with Lambda is a platform where developers no longer need to think about the following:

  • Operating System – no need to select, secure, configure, administer or patch the OS
  • Servers – no cost risk of over-provisioning and no performance risk of under-provisioning
  • Capacity – no need to monitor utilization and scale capacity based on load
  • High Availability – compute resources are available across multiple AZs

In summary, a serverless platform is one on which an application can be deployed without having to provision or administer any of the resources within the platform. Just show up with some code and the platform handles all the ‘ilities’.
[Diagram: components of the serverless application]
The diagram above highlights the components that are used in this serverless application. Let’s look at each one individually:

  • Node.js + Express – In the demo application for this blog series, we will be deploying into Lambda a Node.js application that leverages the Express framework. There is a small bit of “glue” code, highlighted later in the series, that adapts the Lambda contract to the Express contract (a rough sketch follows this list).
  • Lambda – This is where your application logic runs. You deploy your code here but do not need to specify the number of servers or size of the servers to run the code on. You only pay for the number of requests and amount of time those requests take to execute.
  • API Gateway – The API Gateway exposes your Lambda function at an HTTP endpoint. It provides capabilities such as authorization, policy enforcement, rate limiting and data transformation as a service that is entirely managed by Amazon.
  • DynamoDB – Dynamic data is stored in a DynamoDB table. DynamoDB is a NoSQL datastore, entirely managed by Amazon, with effectively unlimited storage and throughput capacity.
  • S3 – All static content including HTML, CSS, images and client-side JavaScript is stored in an S3 bucket and served as a static website directly from S3.
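
To give a sense of what that glue can look like, here is a rough, illustrative sketch of one approach: run the Express app on a local ephemeral port and proxy the Lambda event to it as an HTTP request. The ./app module and the event.method/event.path/event.body fields are assumptions about a particular API Gateway mapping; this is not necessarily the exact glue code shown later in the series.

```javascript
var http = require('http');
var app = require('./app');   // hypothetical module that exports the Express app

// Start the Express app on an ephemeral local port once per container;
// the port is resolved through a promise so the handler can wait for readiness.
var server = http.createServer(app);
var listening = new Promise(function (resolve) {
  server.listen(0, function () { resolve(server.address().port); });
});

exports.handler = function (event, context) {
  listening.then(function (port) {
    // Forward the Lambda event to the Express app as a real HTTP request.
    // event.method, event.path, event.headers and event.body assume an
    // API Gateway mapping template that passes these fields through.
    var req = http.request({
      port: port,
      method: event.method || 'GET',
      path: event.path || '/',
      headers: event.headers || {}
    }, function (res) {
      var body = '';
      res.on('data', function (chunk) { body += chunk; });
      res.on('end', function () {
        context.succeed({ statusCode: res.statusCode, body: body });
      });
    });
    req.on('error', function (err) { context.fail(err); });
    if (event.body) {
      req.end(typeof event.body === 'string' ? event.body : JSON.stringify(event.body));
    } else {
      req.end();
    }
  });
};
```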

The diagram below compares the pricing for running a Node.js application with Lambda and API Gateway versus a pair of EC2 instances behind an ELB. Notice that for the m4.large, the break-even point is around two million requests per day. It is important to mention that 98.8% of the cost of the serverless deployment comes from API Gateway; the cost of running the application in Lambda is insignificant relative to the cost of API Gateway.
[Chart: cost comparison of the serverless application vs. EC2 + ELB]
This cost analysis shows that applications and environments with low transaction volume can realize cost savings by running on Lambda + API Gateway, but API Gateway becomes cost prohibitive at higher scale.
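
To make the break-even arithmetic concrete, here is a rough sketch of the comparison. The Lambda and API Gateway rates are the published per-request prices at the time of writing; the EC2 and ELB hourly rates, memory size, and request duration are placeholders to be replaced with values for your own workload, so the numbers will not match the chart above exactly.

```javascript
// Rough break-even sketch (not the exact model behind the chart above).
var LAMBDA_PER_REQUEST   = 0.20 / 1e6;   // $0.20 per million Lambda requests
var LAMBDA_PER_GB_SECOND = 0.00001667;   // Lambda compute charge per GB-second
var APIGW_PER_REQUEST    = 3.50 / 1e6;   // $3.50 per million API Gateway calls
var EC2_HOURLY           = 0.12;         // placeholder: m4.large on-demand rate
var ELB_HOURLY           = 0.025;        // placeholder: load balancer hourly rate
var HOURS_PER_MONTH      = 730;

function serverlessMonthlyCost(requestsPerDay, memoryGB, avgDurationSec) {
  var requests = requestsPerDay * 30;
  var computeCost = requests * memoryGB * avgDurationSec * LAMBDA_PER_GB_SECOND;
  var requestCost = requests * (LAMBDA_PER_REQUEST + APIGW_PER_REQUEST);
  return computeCost + requestCost;
}

function ec2MonthlyCost(instanceCount) {
  return (instanceCount * EC2_HOURLY + ELB_HOURLY) * HOURS_PER_MONTH;
}

// e.g. 2M requests/day on a 128 MB function averaging 100 ms, vs. a pair of instances
console.log(serverlessMonthlyCost(2e6, 0.128, 0.1).toFixed(2));  // roughly $234.80
console.log(ec2MonthlyCost(2).toFixed(2));                       // roughly $193.45
```

With these placeholder assumptions the two lines cross in the neighborhood of two million requests per day, and the API Gateway per-request charge clearly dominates the serverless side.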

What is Serverless Delivery?

Serverless delivery is simply the application of serverless platforms to achieve continuous delivery fundamentals. Specifically, a serverless delivery pipeline does not include tools such as Jenkins or resources such as EC2 instances, Auto Scaling groups, and ELBs.
[Diagram: components of the serverless delivery pipeline]
The diagram above shows the technology used to accomplish serverless delivery for the sample application. Let’s look at what each component provides:

  • AWS CodePipeline – Orchestrates the various Lambda tasks that move code checked into GitHub toward production.
  • AWS S3 Artifact Bucket – Each action in the pipeline can create a new artifact. The artifact becomes an output from the action in the pipeline and is stored in an S3 bucket to become an input for the next action.
  • AWS Lambda – You create Lambda functions to do the work of individual actions in the pipeline. For example, running a gulp task on a repository is handled by a Lambda function (see the sketch after this list).
  • NPM and Gulp – NPM is used for resolving all the dependencies of a given repository. Gulp is used for defining the tasks of the repository, such as running unit tests and packaging artifacts.
  • Testing tools – JSHint is used for performing static analysis of the code, Mocha is a unit test framework for JavaScript, and Chai is a BDD assertion library for describing the unit tests.
  • AWS CloudFormation – CFN templates are used to create and update all these resources in a serverless stack.
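
To show the shape of one of these pipeline actions, here is a minimal sketch of a Lambda function invoked by CodePipeline. The runGulpTask helper and the use of UserParameters to select the gulp task are assumptions for illustration, not the exact functions built later in the series.

```javascript
// Minimal sketch of a Lambda function backing a CodePipeline "Invoke" action.
// The job details (including which gulp task to run, passed via UserParameters)
// are read from the event, and the result is reported back to CodePipeline.
var AWS = require('aws-sdk');
var codepipeline = new AWS.CodePipeline();

exports.handler = function (event, context) {
  var job = event['CodePipeline.job'];
  var task = job.data.actionConfiguration.configuration.UserParameters; // e.g. "test" or "package"

  runGulpTask(task, job.data.inputArtifacts, function (err) {
    if (err) {
      codepipeline.putJobFailureResult({
        jobId: job.id,
        failureDetails: {
          type: 'JobFailed',
          message: String(err),
          externalExecutionId: context.awsRequestId
        }
      }, function () { context.fail(err); });
    } else {
      codepipeline.putJobSuccessResult({ jobId: job.id }, function () {
        context.succeed('Task complete');
      });
    }
  });
};

function runGulpTask(task, inputArtifacts, callback) {
  // Hypothetical placeholder: download the input artifact from the S3 artifact
  // bucket, run the named gulp task, upload any output artifact, then call back.
  callback(null);
}
```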

Serverless delivery for traditional architectures?

Although this serverless delivery architecture could be applied to more traditional application architectures (e.g., a Java application on EC2 and ELB resources), the challenge might be having pipeline actions complete within the 300-second Lambda execution limit. For example, running the Maven phases of a build (resolving dependencies, compiling, unit testing, and packaging) within a single Lambda function invocation would likely be difficult. There may be opportunities to split the goals into multiple invocations and persist state to S3, but that is beyond the scope of this series.

Pricing

The Lambda pricing model is favorable for applications that have idle time, because cost grows linearly with the number of executions. The diagram below compares the pricing for running the pipeline with Lambda and CodePipeline against a Jenkins server running on an EC2 instance. For best performance, the Jenkins server ought to run on an m4.large instance, but to highlight the savings, m3.medium and t2.micro instances were evaluated as well. Notice that for the m4.large, the break-even point is over 600 builds per day, and even with a t2.micro, break-even doesn't happen until well over 100 builds per day.
[Chart: cost comparison of the serverless delivery pipeline vs. Jenkins on EC2]
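
As a rough sketch of that comparison (again with placeholder assumptions for build duration, Lambda memory size, and the Jenkins instance's hourly rate, so the numbers will not match the chart exactly), the arithmetic looks something like this:

```javascript
// Rough sketch of the pipeline cost comparison (not the exact model behind the chart).
var CODEPIPELINE_MONTHLY = 1.00;         // $1 per active pipeline per month
var LAMBDA_PER_GB_SECOND = 0.00001667;   // Lambda compute charge per GB-second
var JENKINS_HOURLY       = 0.12;         // placeholder: m4.large on-demand rate
var HOURS_PER_MONTH      = 730;

function pipelineMonthlyCost(buildsPerDay, lambdaSecondsPerBuild, memoryGB) {
  var gbSeconds = buildsPerDay * 30 * lambdaSecondsPerBuild * memoryGB;
  return CODEPIPELINE_MONTHLY + gbSeconds * LAMBDA_PER_GB_SECOND;
}

function jenkinsMonthlyCost() {
  return JENKINS_HOURLY * HOURS_PER_MONTH;   // the instance runs whether or not builds happen
}

// e.g. 100 builds/day, each consuming ~2 minutes of Lambda time at 1.5 GB
console.log(pipelineMonthlyCost(100, 120, 1.5).toFixed(2));  // roughly $10.00
console.log(jenkinsMonthlyCost().toFixed(2));                // roughly $87.60
```
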
In conclusion, running a continuous delivery pipeline with CodePipeline + Lambda is very attractive based on the cost efficiency of utility pricing, the simplicity of using a managed services environment, and the tech stack parity of using Node.js for both the application and the pipeline.

Stay Tuned!

Next week we will dive into part two of this series, looking at the changes needed for an Express application to run in Lambda and the CloudFormation templates needed to create a serverless delivery pipeline. Finally, part three will conclude with the details of each stage of the pipeline and the Lambda functions that support them.

Resources