In the last post we created two basic applications, each with a simple shell script to automate deploying it into AWS. In this post we continue refactoring those deploy scripts so that both applications run through a common pipeline. The goal is pipeline code driven by metadata, letting us customize each application's pipeline through configuration alone. Although we are not using a build server here, one could easily orchestrate the pipeline using the framework we create.
Our previous deploy scripts focused on deploying the applications to AWS. Looking at the scripts in each repository's /pipeline folder side by side, we notice that they are almost identical. This seems like a good place to practice some code reuse, so let's build a pipeline that lets us share common code across the two applications. Once complete, that common code will be flexible enough to be used across multiple applications.
We start by defining the steps of a pipeline from the existing deploy scripts. Reading through them, we can identify the following: check out the code and gather variables, run some tests, create an AWS CloudFormation (CFN) stack, and run a simple test against each deployed application.
| Pipeline Step | App: blog_refactor_php | App: blog_refactor_nodejs |
| --- | --- | --- |
| SCM Polling | Variables, checkout code… | Variables, checkout code… |
| Static Analysis | foodcritic() | foodcritic(), jslint() |
| App Prerequisites | AWS Relational Database Service (RDS) creation, Chef runlist/attributes upload | Chef runlist/attributes upload |
| App Deployment | AWS Auto Scaling Group (ASG) creation/app deployment | ASG creation/app deployment |
| Acceptance Testing | curl endpoint, expect 200 | curl endpoint, expect 200 |
Now that we have the steps laid out, we need to decide on a technology with which to implement this pipeline.
(Rake + Ruby) > Bash
We could continue to use Bash for our pipeline code by adding some structure rather than keeping a flat script. But even though extracting the steps we have identified into functions gains us some code reuse, we would still lack the richer data structures and libraries of a general-purpose language. By switching to a more advanced language we gain library support we can leverage to avoid reinventing the wheel. Ruby and Rake seem like a good combination for building the pipeline, since together they fulfill all of these requirements.
Rake is a well-established build tool that leverages the power of Ruby as a dynamic language. Besides defining tasks with prerequisites and executing tasks in parallel, it offers the ability to define tasks dynamically. Rake is task-oriented, which mirrors our pipeline "steps" idea quite well. We also get flexibility out of Rake: tasks can be run directly from the command line or integrated into a CI/CD system. And since Rake is just Ruby, integrating any classes we create into the tasks is simple as well.
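As a quick illustration of those features (not the pipeline gem's actual code; the step names here are placeholders), a handful of lines of Rake gives us dynamically defined tasks chained together by prerequisites:

```ruby
# Illustrative Rakefile -- step names are placeholders, not the gem's steps.
steps = %w[static_analysis app_prerequisites app_deployment]

# Define one task per step dynamically; each depends on the step before it.
steps.each_with_index do |step, i|
  desc "Run the #{step} pipeline step"
  task step.to_sym => (i.zero? ? [] : [steps[i - 1].to_sym]) do
    puts "Executing #{step}..."
  end
end
```

Running `rake app_deployment` would then execute all three steps in order.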
To maximize code reuse in an easy, repeatable way, you can create a Ruby gem to house the common code. That is the approach we took: using metadata to dynamically define Rake tasks, and wiring those tasks to reusable classes in a Ruby gem.
Not Your Parent’s Rakefile
Our approach uses Rake primarily as the connective tissue between a hypothetical CI/CD server and the underlying Ruby code that executes the pipeline step logic.
Normally, Rake tasks are defined alongside the code they execute, either in a Rakefile housing multiple tasks or split into separate .rake files. Our sample applications also use a Rakefile to leverage the pipeline gem, but its job is mostly to read the application's pipeline metadata and convey it to the gem's Rakefile.
See the gist on GitHub.
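The gist isn't reproduced here, but a minimal sketch of that application-side Rakefile might look like the following, assuming the metadata lives in a pipeline_metadata.yaml file next to it and the gem is simply named pipeline (both names are assumptions, not the article's exact code):

```ruby
# pipeline/Rakefile (application side) -- a minimal sketch.
require 'yaml'

# Read this application's pipeline metadata and expose it to the gem.
PIPELINE_METADATA = YAML.load_file(
  File.expand_path('pipeline_metadata.yaml', __dir__)
)

# Hand control to the gem's Rakefile, which defines the actual tasks.
spec = Gem::Specification.find_by_name('pipeline')
load File.join(spec.gem_dir, 'Rakefile')
```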
The gem’s Rakefile iterates over the steps array in the pipeline metadata, defining one Rake task per pipeline step. Each Rake task’s pipeline functionality is delegated to a dynamically-instantiated Ruby class (the ‘worker’ class in the code snippet below) assigned to that step in the metadata.
See the gist on GitHub.
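Again as a sketch rather than the gist itself, the gem-side Rakefile might look like this, assuming each metadata entry carries a stage, a name, and a worker class name (the exact shape of the metadata and the worker interface are assumptions based on the description above):

```ruby
# Rakefile inside the pipeline gem -- a sketch of dynamic task definition.
@store = ParameterStore.new # the parameter-store instance discussed below

namespace :build do
  PIPELINE_METADATA['steps'].each do |step|
    namespace step['stage'].to_sym do
      desc "Run the #{step['name']} pipeline step"
      task step['name'].to_sym do
        # Look up and instantiate the worker class named in the metadata,
        # injecting the shared store...
        worker = Object.const_get(step['worker']).new(@store)
        # ...and delegate this step's work to it.
        worker.execute
      end
    end
  end
end
```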
The @store variable is an instance of a parameter-store class; substitute any parameter or credentials store in your own implementation. Injected into each worker class, the store instance gives a worker access to the outputs of previous Rake tasks, as well as the ability to publish outputs for downstream Rake tasks.
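For illustration, here is a toy in-memory store with the read/write behavior described above; a real implementation might wrap SSM Parameter Store, Consul, or similar (this class is hypothetical):

```ruby
# A hypothetical in-memory parameter store; swap in whatever backend you use.
class ParameterStore
  def initialize
    @params = {}
  end

  # Workers call this to publish outputs for downstream tasks.
  def put(key, value)
    @params[key] = value
  end

  # Workers call this to read outputs produced by earlier tasks.
  def get(key)
    @params.fetch(key)
  end
end
```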
The steps are just Ruby classes; your team codes them to match what your pipelines need to do, and likewise codes the store class to match your team's needs. Because of this, what we're showing here is less a turnkey tool and more a framework to help your team maximize code reuse.
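In that spirit, a worker class only needs to accept the store and implement an execute method. A skeletal example matching the acceptance-testing step from our table (the class name and store keys are ours, not the gem's exact API):

```ruby
# Skeleton of a pipeline worker class; names are illustrative.
class AcceptanceTest
  def initialize(store)
    @store = store
  end

  def execute
    # Read the endpoint published by the deployment step...
    endpoint = @store.get('app_endpoint')
    # ...and fail the pipeline unless curl gets an HTTP 200 back.
    raise "Acceptance test failed for #{endpoint}" unless system("curl -sf #{endpoint} > /dev/null")
  end
end
```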
That seems like a sweet piece of tech, but what do we do when one of our applications has pipeline needs that don’t align perfectly with our gem’s worker class capabilities?
Down With Conformity!
For our post, we have refactored both our PHP and NodeJS applications to leverage the pipeline gem. While most of the pipeline gem’s worker classes are sufficient to support each application’s CD pipeline, our framework needed to be flexible enough to extend a worker class as well as to support fully-custom steps.
Extending standard steps via worker-class inheritance
The gem defines a class that performs static code analysis as part of the CD pipeline, namely running foodcritic against the application's pipeline cookbook. To also support JavaScript linting for the NodeJS application, we can follow these steps so that the application provides its own pipeline customization.
First, we create a Ruby class (we called it `ExtendedStaticAnalysis`) in the NodeJS application's pipeline/lib folder that inherits from the gem's StaticAnalysis class. This gives us access to the foodcritic tests provided by the base worker class.
See the gist on GitHub.
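A sketch of that subclass (the gem's module and class names are assumptions based on the description above):

```ruby
# pipeline/lib/extended_static_analysis.rb -- a sketch; the gem's actual
# module/class names may differ.
require 'pipeline'

class ExtendedStaticAnalysis < Pipeline::StaticAnalysis
  # Inherits execute, which runs foodcritic against the pipeline cookbook.
end
```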
Next we add a method to ExtendedStaticAnalysis that performs the jslint analysis.
See the gist on GitHub.
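Continuing the sketch, we can override execute to run the inherited foodcritic analysis first and then the new jslint check (the shell command is illustrative):

```ruby
class ExtendedStaticAnalysis < Pipeline::StaticAnalysis
  def execute
    super                 # run the base class's foodcritic analysis
    run_jslint_analysis   # then the NodeJS-specific linting
  end

  private

  # Illustrative only: lint the application's JavaScript sources.
  def run_jslint_analysis
    raise 'jslint failed' unless system('jslint app/**/*.js')
  end
end
```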
Finally, we change our application's pipeline metadata so that it instantiates this new class instead of the gem's standard worker for that step. If we then run `rake --tasks` to list the steps our pipeline now supports, we'll see a `build:commit:extended_static_analysis` task in the list! (see Figure 1)
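The metadata change itself might be as small as swapping the worker name for that step; in the hypothetical format used by the sketches above:

```ruby
# Hypothetical metadata entry -- format is illustrative.
{ 'stage'  => 'commit',
  'name'   => 'extended_static_analysis',
  'worker' => 'ExtendedStaticAnalysis' }
```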
Adding custom steps
As with extending step classes, your application's pipeline can implement entirely new steps instead of using one of the built-in step implementations. And if there's a serious mismatch between what your application's pipeline needs to do and what the gem provides, new worker classes can be added to the gem to support a whole new category of pipeline steps.
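A fully-custom step follows the same pattern as the extension example: a class in the application's pipeline/lib folder that accepts the store and implements execute, wired in through the metadata. Everything named here is hypothetical:

```ruby
# pipeline/lib/cache_warmer.rb -- a hypothetical custom step.
class CacheWarmer
  def initialize(store)
    @store = store
  end

  def execute
    # Hit the freshly deployed endpoint a few times to warm its caches.
    endpoint = @store.get('app_endpoint')
    3.times { system("curl -s #{endpoint} > /dev/null") }
  end
end
```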
That’s a wrap
We now have a gem that can be reused and extended for our application pipelines and beyond. By encapsulating the common logic in the gem's Ruby classes, we've eliminated the repetitious code (and the temptation to copy/paste!) without giving up the flexibility to have custom logic in specific pipeline steps. As we expand our suite of applications, we rely on metadata plus a small amount of custom code where necessary. And instead of executing the Rake tasks from a wrapper script or by hand (great for step development), you can integrate them with your CI/CD server.
See the GitHub repositories referenced by this article: blog_refactor_php and blog_refactor_nodejs.
Authors: Jeff Dugas and Matt Adams
Interested in building out cutting edge continuous delivery platforms and developing a deep love/hate relationship with Rake? Stelligent is hiring!