A containerized Jenkins setup, with all the tools ready to go, is super useful for the DevOps developer. Jenkins makes it easy to parameterize and manage jobs, so running numerous tests in parallel is efficient and cost-effective. Docker allows us to containerize such an environment. This blog post defines an all-in-one Jenkins container for use in DevOps development.
We define our Jenkins container in the Dockerfile, and then build and run it with:
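A minimal sketch (the image name jenkins-devops is a placeholder; the networking and volume flags are explained below):

docker build -t jenkins-devops .
docker run --name jenkins-devops --net=host \
  --volume /path/to/jenkins/on/host:/var/lib/jenkins \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --volume /dev/vboxdrv:/dev/vboxdrv \
  jenkins-devops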

Jenkins allows us to capture and retain output from job runs, and we can share that output with other developers as a link. We also want Jenkins to establish outgoing connections natively from the host (i.e., the container does not use NAT). This is important to ensure our development jobs have access to services on the host, such as Hologram for the AWS metadata service. To do this, we run the container with host networking by passing --net=host to the docker run command.
The container will open tcp/8080 directly on the Docker host, so others can reach it, given proper network/host firewalling rules. Any ports opened by jobs will also open on the host IP and may also be reachable remotely (if multiple jobs try to open the same port, the jobs will fail, so such jobs must be run serially). Remote users would need a VPN setup, or you could simply run the container in the cloud and set TCP/IP rules to allow *only* each developer's public IP. Please be aware that Remote Code Execution is a serious exploit, and a publicly accessible Jenkins is the best way to facilitate it. In addition to layer 3/4 TCP/IP firewalling rules, layer 7 application security should always be enforced. This means creating a Jenkins user for each person connecting and setting a secure password for each account.
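As a hedged illustration of such layer 3/4 rules using iptables on the Docker host (the developer IP 203.0.113.10 is a placeholder; adapt to your environment):

# allow one developer's public IP, then drop everyone else
iptables -A INPUT -p tcp --dport 8080 -s 203.0.113.10/32 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP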
We need to make our Jenkins home directory available to the container, so we bind the Jenkins directory from our host to /var/lib/jenkins by adding a volume directive to our docker run command:
--volume /path/to/jenkins/on/host:/var/lib/jenkins
We can easily port our environment between hosts by simply rsync'ing the Jenkins home. We can also encrypt /path/to/jenkins/on/host, so that it must be specifically decrypted for the Jenkins container to access it. When not using the container, the data can remain encrypted at rest. This can be set up by making /path/to/jenkins/on/host a separate partition, which can then be encrypted. The same process can be very useful when containerizing source control, like GitLab.
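For example, porting and encrypting might look roughly like this (hostnames, device names, and mount points are assumptions, and cryptsetup/LUKS is just one way to encrypt the partition):

# port the environment to another host
rsync -a /path/to/jenkins/on/host/ user@otherhost:/path/to/jenkins/on/host/

# encrypt the dedicated partition and mount it for container use
cryptsetup luksFormat /dev/sdb1
cryptsetup luksOpen /dev/sdb1 jenkins_home
mkfs.ext4 /dev/mapper/jenkins_home
mount /dev/mapper/jenkins_home /path/to/jenkins/on/host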
As there are race conditions and possible corruption when running Docker-in-Docker, it may be better to simply bind the Docker socket to the container. This allows containers launched by Jenkins to run on the host in parallel. We also want to bind the vbox device so that Jenkins can run Vagrant with the VirtualBox backend. We add these volume directives to our docker run command:
--volume /dev/vboxdrv:/dev/vboxdrv
--volume /var/run/docker.sock:/var/run/docker.sock

We use the CentOS 7 base image for our Jenkins container, as we can trust its source origin as much as is possible, and then we add Jenkins and all our tools on top in the Dockerfile:
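A hedged sketch of the root-level portion of such a Dockerfile (repository URLs, versions, and the exact tool list are assumptions based on the tools discussed in this post):

FROM centos:7

# Base tooling and Java for Jenkins
RUN yum -y install wget git unzip which sudo java-1.8.0-openjdk && yum clean all

# Jenkins from the official RPM repository
RUN wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo && \
    rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key && \
    yum -y install jenkins && yum clean all

# Docker CLI only; the daemon is reached through the bound host socket
RUN yum -y install docker && yum clean all

# Vagrant; VirtualBox userland must match the host's /dev/vboxdrv module version
RUN yum -y install https://releases.hashicorp.com/vagrant/2.2.9/vagrant_2.2.9_x86_64.rpm

# Nix store location, owned by the jenkins user created by the jenkins RPM
RUN mkdir -m 0755 /nix && chown jenkins:jenkins /nix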

Note that most of the above commands run as root; however, we need to set a few things up as the jenkins user. This is necessary for Nix to work properly for the jenkins user. In addition to installing Nix, we also prepopulate the centos/7 vbox image into the container. This saves time, as Vagrant vbox jobs in Jenkins do not have to download the box when run.
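The jenkins-user steps might look like this (a sketch; the Nix installer URL is upstream's, and centos/7 is the box named in this post):

USER jenkins
ENV HOME /var/lib/jenkins

# Per-user Nix install; assumes /nix was created and chowned to jenkins above
RUN curl -L https://nixos.org/nix/install | sh

# Prepopulate the centos/7 box so Vagrant jobs skip the initial download
RUN vagrant box add centos/7 --provider virtualbox

One caveat with this sketch: Vagrant caches boxes under ~/.vagrant.d, so with the Jenkins home bind-mounted at runtime you may want VAGRANT_HOME pointed at a path inside the image instead.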
We now have set up various commonly used tools and can execute them within Jenkins. We set the entrypoint for the container to launch Jenkins via the CMD directive, and we are done.
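For instance (the WAR path matches the CentOS jenkins RPM layout; the exact invocation is an assumption):

EXPOSE 8080
CMD ["java", "-jar", "/usr/lib/jenkins/jenkins.war"]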
If all works properly, you should see the Jenkins GUI at http://localhost:8080.
A quick test job should show that all the above tools are now available to Jenkins jobs:
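A hedged example of such a job's shell build step (version flags only; output will vary by install):

#!/bin/bash
set -e
docker version
vagrant --version
git --version
nix-env --version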

There are other tool-specific Docker containers available, such as the Ansible container, which may better suit certain needs. Other containerization technologies, like libvirt, may be preferable, but Docker Hub makes up for any deficiencies through its ease of container image sharing.
This setup is only a good idea for development. In a production CI environment, Jenkins workers should execute the jobs. Each job would be tied to a specifically locked-down worker, based on job requirements. Installing tools on the production Jenkins master should not be necessary (other than monitoring tools like the CloudWatch Agent or Splunk Forwarder). Care should be taken to minimize the master's attack surface. Docker containers are not meant to run a full interactive environment with many child processes, so in production a better Jenkins experience may be had by running Jenkins natively on bare metal, or in a VM such as AWS EC2, rather than inside a container.
Thanks for reading,
@hackoflamb
Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If you are, Stelligent is hiring and we would love to hear from you!
