In this first post of a series exploring containerized CI solutions, I’m going to be addressing the CI tool with the largest market share in the space: Jenkins. Whether you’re already running Jenkins in a more traditional virtualized or bare metal environment, or if you’re using another CI tool entirely, I hope to show you how and why you might want to run your CI environment using Jenkins in Docker, particularly on Amazon EC2 Container Service (ECS). If I’ve done my job right and all goes well, you should have run a successful Jenkins build on ECS well within a half hour from now!
For more background information on ECS and provisioning ECS resources using CloudFormation, please feel free to check out Stelligent’s two-part blog series on the topic.
An insanely quick background on Jenkins
Jenkins is an open source CI tool written in Java. One of its strengths is its very large collection of plugins, including one for ECS. The Amazon EC2 Container Service Plugin can launch containers on your ECS cluster that automatically register themselves as Jenkins slaves, execute the appropriate Jenkins job on the container, and are then automatically removed when the build is done.
But, first…why?
Before diving into the demo, why would you want to run your CI builds in containers? First, containers are portable. Especially if you also use Docker for your development environment, this gives you a great deal of confidence that if your application builds in a Dockerized CI environment, it will build locally as well, and vice versa. Next, even if you're not using Docker for your development environment, a containerized CI environment gives you the benefit of an immutable build infrastructure, where you can be sure you're building your application in a fresh, ephemeral environment each time. And last but certainly not least, provisioning containers is very fast compared to virtual machines, something you'll notice immediately if you're used to spinning up VMs/cloud instances for build slaves, as with the Amazon EC2 Plugin.
As for running the Jenkins master on ECS, one benefit is fast recovery if the Jenkins EC2 instance goes down. When using EFS for Jenkins state storage and a multi-AZ ECS cluster like in this demo, the Jenkins master will recover very quickly in the event of an EC2 container instance failure or AZ outage.
Okay, let’s get down to business…
Let’s begin: first launch the provided CloudFormation stack by clicking the button below:
You'll need to enter the following parameters (if you'd rather launch the stack from code instead of the console button, see the boto3 sketch after this list):
- AvailabilityZone1: an AZ that your AWS account has access to
- AvailabilityZone2: another accessible AZ in the same region as AvailabilityZone1
- InstanceType: EC2 instance type for ECS container instances (must be at least t2.small for this demo)
- KeyPair: a key pair that will allow you to SSH into the ECS container instances, if necessary
- PublicAccessCIDR: a CIDR block that will have access to view the public Jenkins proxy and SSH into container instances (ex: 8.8.8.8/32)
- NOTE: Jenkins will not automatically be secured with a username and password, so this parameter can be used to secure your Jenkins master by limiting network access to the provided CIDR block. If you’d like to limit access to Jenkins to only your public IP address, enter “[YOUR_PUBLIC_IP_ADDRESS]/32” here, or if you’d like to allow access to the world (and then possibly secure Jenkins yourself afterwards) enter “0.0.0.0/0”.
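If you prefer launching from code, here's a minimal boto3 sketch of creating the same stack with those parameters. The template URL, stack name, and parameter values below are placeholders; substitute the template behind the launch button and your own values:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # or your preferred region

# NOTE: TemplateURL is a placeholder; use the template referenced by the launch button above.
cfn.create_stack(
    StackName="jenkins-ecs-demo",
    TemplateURL="https://s3.amazonaws.com/YOUR_BUCKET/jenkins-ecs.template",  # placeholder
    Capabilities=["CAPABILITY_IAM"],  # likely required, since the stack creates IAM roles
    Parameters=[
        {"ParameterKey": "AvailabilityZone1", "ParameterValue": "us-east-1a"},
        {"ParameterKey": "AvailabilityZone2", "ParameterValue": "us-east-1b"},
        {"ParameterKey": "InstanceType", "ParameterValue": "t2.small"},
        {"ParameterKey": "KeyPair", "ParameterValue": "my-key-pair"},
        {"ParameterKey": "PublicAccessCIDR", "ParameterValue": "8.8.8.8/32"},
    ],
)
```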
In a nutshell, this CloudFormation stack provisions a VPC containing a multi-AZ ECS cluster, and a Jenkins ECS service that uses Amazon Elastic File System (Amazon EFS) storage to persist Jenkins data. For ease of use, the stack also contains a basic NGINX reverse proxy that exposes Jenkins via a public endpoint. Jenkins and NGINX each consist of an ECS service, an ECS task definition, and a Classic ELB (internal for Jenkins, and Internet-facing for the proxy).
In practice, I think a lot of organizations would choose to keep Jenkins in a private subnet and rely on a VPN for outside access. To keep things relatively simple, however, this stack only creates public subnets and relies on security groups for network access control.
Complications
There are a couple of reasons why running a Jenkins master on ECS is a bit complicated. One is that ECS allows you to associate only one load balancer with an ECS service, while Jenkins runs as a single Java application that listens for web traffic on one port and for JNLP connections from build slaves on another (8080 and 50000 by default, respectively). When launching a workload in ECS, using an Elastic Load Balancer for service discovery as I'm doing in this example, and provisioning with CloudFormation, you need a Classic Load Balancer that listens on both Jenkins ports (listening on multiple ports is not currently possible with the recently released Application Load Balancer).
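To make that concrete, here's a hedged boto3 sketch of a Classic Load Balancer with listeners on both Jenkins ports; the load balancer name, subnets, and security group IDs are placeholders, not values from the stack:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # or your stack's region

elb.create_load_balancer(
    LoadBalancerName="jenkins-internal",              # hypothetical name
    Scheme="internal",                                # the Jenkins ELB in this stack is internal
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholders: your two public subnets
    SecurityGroups=["sg-0123456789abcdef0"],          # placeholder
    Listeners=[
        # Jenkins web UI traffic
        {"Protocol": "HTTP", "LoadBalancerPort": 8080, "InstancePort": 8080},
        # JNLP connections from build slaves
        {"Protocol": "TCP", "LoadBalancerPort": 50000, "InstancePort": 50000},
    ],
)
```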
Another complication is that Jenkins stores its state as XML on disk, as opposed to some other CI tools that can use an external database to store state (examples coming later in this blog series). This is why I chose EFS for this stack: when an ECS service needs persistent data, that data must be available on every ECS container instance, because a task for the service can be placed on any instance in the cluster. EFS solves this by providing an NFS file system that can be mounted by all of the container instances in your cluster.
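The idea boils down to one file system with a mount target in each AZ, mounted over NFS on every container instance. A rough boto3 sketch, with placeholder subnet and security group IDs (the stack does all of this for you):

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")  # or your stack's region

# Create the shared file system that will hold the Jenkins home directory.
fs = efs.create_file_system(CreationToken="jenkins-home", PerformanceMode="generalPurpose")

# One mount target per Availability Zone, so container instances in either AZ can mount it.
# (In practice you'd wait for the file system to become "available" first.)
for subnet in ["subnet-aaaa1111", "subnet-bbbb2222"]:   # placeholders
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet,
        SecurityGroups=["sg-0123456789abcdef0"],        # must allow NFS (TCP 2049)
    )

# Each container instance then mounts the file system over NFS (e.g. at /mnt/efs),
# and the Jenkins task definition maps that host path into the container.
```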
Coffee break!
Depending on how long you took to digest that fancy diagram and my explanation, feel free to grab a cup of coffee; the stack took about 7-8 minutes to complete successfully during my testing. When you see that beautiful CREATE_COMPLETE in the stack status, continue on.
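If you'd rather poll from code than keep refreshing the console, a boto3 waiter does the trick (the stack name is whatever you chose when launching):

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # or your stack's region

# Blocks until the stack reaches CREATE_COMPLETE (or raises if creation fails).
cfn.get_waiter("stack_create_complete").wait(StackName="jenkins-ecs-demo")  # your stack name
print("Stack is ready")
```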
Jenkins configuration
One of the CloudFormation stack outputs is PublicJenkinsURL; navigate to that URL in your browser and you should see the Jenkins home page (it may take a minute or so for the instance to come into service behind the ELB):
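You can also pull that output programmatically rather than copying it from the console; a quick sketch:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # or your stack's region
stack = cfn.describe_stacks(StackName="jenkins-ecs-demo")["Stacks"][0]  # your stack name

# Turn the stack's outputs into a dict and grab the Jenkins URL.
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(outputs["PublicJenkinsURL"])
```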
To make things easier, let’s click ENABLE AUTO REFRESH (in the upper-right) right off the bat.
Then click Manage Jenkins > Manage Plugins, navigate to the Available tab, and select these two plugins (you can filter the plugins by each name in the Filter text box):
- Amazon EC2 Container Service Plugin
- Git plugin
- NOTE: there are a number of “Git” plugins, but you’ll want the one that’s just named “Git plugin”
And click Download now and install after restart.
Select the Restart Jenkins when installation is complete and no jobs are running checkbox at the bottom, and Jenkins will restart after the plugins are downloaded.
When Jenkins comes back after restarting, go back to the Jenkins home screen, and navigate to Manage Jenkins > Configure System.
Scroll down to the Cloud section, click Add a new cloud > Amazon EC2 Container Service Cloud, and enter the following configuration (substituting the CloudFormation stack output where indicated):
- Name: ecs
- Amazon ECS Credential: – none – (because we’re using the IAM role of the container instance instead)
- Amazon ECS Region Name: us-east-1 (or the region you launched your stack in)
- ECS Cluster: [CloudFormation stack output: JenkinsConfigurationECSCluster]
- Click Advanced…
- Alternative Jenkins URL: [CloudFormation stack output: JenkinsConfigurationAlternativeJenkinsURL]
- Click ECS slave templates > Add
- Label: jnlp-slave-with-java-build-tools
- Docker Image: cloudbees/jnlp-slave-with-java-build-tools:latest
- Filesystem root: /home/jenkins
- Memory: 512
- CPU units: 512
And click Save at the bottom.
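For a rough idea of what that slave template means in ECS terms: when a build needs a slave, the plugin registers a task definition from the template and runs it on the cluster. The boto3 sketch below is only illustrative of what the template roughly translates to (the family name is hypothetical, and the plugin handles the registration itself; you don't need to run this):

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # or your stack's region

# Roughly what the ECS slave template above translates to (illustrative only).
ecs.register_task_definition(
    family="jenkins-slave-jnlp",  # hypothetical family name
    containerDefinitions=[
        {
            "name": "jenkins-slave",
            "image": "cloudbees/jnlp-slave-with-java-build-tools:latest",
            "cpu": 512,
            "memory": 512,
            "essential": True,
            # The plugin passes the Jenkins URL, slave name, and secret to the
            # container so it can register itself with the master over JNLP.
        }
    ],
)
```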
That should take you back to the Jenkins home page again. Now click New Item, and enter an item name of aws-java-sample, select Freestyle project, and click OK.
Enter the following configuration:
- Make sure Restrict where this project can be run is selected and set:
- Label Expression: jnlp-slave-with-java-build-tools
- Under Source Code Management, select Git and enter:
- Repository URL: https://github.com/awslabs/aws-java-sample.git
- Under Build, click Add build step > Execute shell, and set:
- Command: mvn package
Click Save.
That’s it for the Jenkins configuration. Now click Build Now on the left side of the screen.
Under Build History, you'll see a "pending – waiting for next available executor" message, which will switch to a progress bar when the ECS container starts. The first build might take a couple of minutes while ECS downloads the Docker build slave image, but subsequent builds should start within a few seconds since the image will be cached on your ECS container instance. When the progress bar appears, click it and you'll see the console output for the build:
And Success!
Okay, Maven is downloading a bunch of dependencies…and more dependencies…and more dependencies…and finally building…and see that “Finished: SUCCESS”? Congratulations, you just ran a build in an ECS Jenkins build slave container!
Next Steps
One thing that you may have noticed is that we used a Docker image provided by CloudBees (the enterprise backers of Jenkins). For your own projects, you might need to build and use a custom build slave Docker image. You’ll probably want to set up a pipeline for each of these Docker builds (and possibly publish to Amazon ECR), and configure an ECS slave template that uses this custom image. One caveat: Jenkins slaves need to have Java installed, which, depending on your build dependencies, may increase the size of your Docker image somewhat significantly (well, relatively so for a Docker image). For reference, check out the Dockerfile of a bare-bones Jenkins build slave provided by the Jenkins project on Docker Hub.
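If you do publish custom slave images to Amazon ECR, creating the repository your pipeline pushes to is straightforward. A hedged boto3 sketch with a hypothetical repository name:

```python
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")  # or your preferred region

# Hypothetical repository for a custom Jenkins build slave image.
repo = ecr.create_repository(repositoryName="jenkins/custom-build-slave")
print(repo["repository"]["repositoryUri"])

# Your Docker image pipeline would then build, tag, and push to this URI with the
# docker CLI (after authenticating via `aws ecr get-login`), and the ECS slave
# template in Jenkins would reference the pushed image instead of the CloudBees one.
```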
Next Next Steps
Pretty cool, right? Well, while it’s the most popular, Jenkins isn’t the only player in the game—stay tuned for a further exploration and comparison of containerized CI solutions on AWS in this blog series!
Interested in Docker, Jenkins, and/or working someplace where your artful use of monkey GIFs will finally be truly appreciated? Stelligent is hiring!