Development Acceleration Through VS Code Remote Containers: How We Leverage VS Code Remote Containers For Rapid Development of cfn_nag
This is the final post in a three-part series about the Visual Studio Code Remote – Containers extension. The first post covered the benefits and general concepts of using a dev container to develop a project. The second showed some basic examples of how to get started and introduced ways to customize the dev container. This last post presents several recipes that further enhance the convenience of the developer experience. Each section goes into detail on a particular topic and lays out items to look out for from a lessons-learned perspective.
Things To Look Out For
There are a few configuration items to keep in mind when creating a new container image for VS Code. The `appPort` should be unique for each project. If there are several projects in an organization, or even under a personal user account, it is important to set these port numbers to a unique value for each project. If a developer is working on more than one project at a time, or has multiple project windows open in VS Code, the ports will clash if the `appPort` values are the same across projects.
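As a minimal sketch, the `appPort` can be set in the project's `devcontainer.json`; the port number below is arbitrary and should be replaced with a value unique to each project:

```json
{
  // .devcontainer/devcontainer.json (sketch; 4567 is an arbitrary, project-unique port)
  "name": "cfn_nag",
  "appPort": ["4567:4567"]
}
```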
Additionally, pay attention to the caching type selected for volumes and mounts attached to the container. By default the container uses the `cached` mount consistency. This is likely fine for most projects, but it wasn't for cfn_nag. In this project the rule and spec files are updated, and then a `rake` task launches another Docker image to run tests against the project. With the `cached` mount consistency, the results coming back from these tasks and tests were inconsistent. Once the workspace mount consistency was changed to `consistent`, these discrepancies disappeared.
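A sketch of how the workspace consistency can be changed in `devcontainer.json` (the target path is an example):

```json
{
  // .devcontainer/devcontainer.json (sketch; /workspace is an example target path)
  "workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind,consistency=consistent",
  "workspaceFolder": "/workspace"
}
```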
Non-Root User
It is best practice not to run services as root unless it is required. This can prevent anybody who gains access to the running container from potentially gaining root access to the local host or installing other applications and services. Even if the container needs privileged access, this can still be achieved with a non-root user and `sudo`.
In order to create a non-root user in the container, the `Dockerfile` needs to be adjusted to include a few items, as shown in the sketch below.
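A minimal sketch of those steps, assuming a Debian/Ubuntu base image; the username and IDs are placeholders:

```dockerfile
# Sketch: create a non-root user (username and IDs are placeholders)
ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID

RUN groupadd --gid $USER_GID $USERNAME \
    && useradd --create-home --shell /bin/bash --uid $USER_UID --gid $USER_GID $USERNAME

# Switch to the non-root user once all root-level setup is complete
USER $USERNAME
```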
To authorize `sudo` access to a particular service for the new user, add the following lines to the `Dockerfile`.
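A hedged example that grants the placeholder user passwordless `sudo`; place these lines before the `USER` switch so they run as root, and tighten the rule to a specific command if broad access isn't needed:

```dockerfile
# Sketch: install sudo and grant the non-root user passwordless access
ARG USERNAME=vscode
RUN apt-get update && apt-get install -y --no-install-recommends sudo \
    && echo "$USERNAME ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/$USERNAME \
    && chmod 0440 /etc/sudoers.d/$USERNAME
```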
Persist Bash History
By default, the command history inside the dev container is not saved between uses. This means that when VS Code is closed, or the container is rebuilt, the command history vanishes as well. There is a way, however, to persist the history between uses; a couple of configuration items are needed to achieve this. Keep in mind that all commands, even those containing sensitive information, will be saved in this volume unless a single space is prepended to the command.
Update the Dockerfile so that each time a `bash` command is used, the history is updated and saved.
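A sketch of the Dockerfile changes, assuming the non-root user created earlier and a `/commandhistory` directory backed by a volume:

```dockerfile
# Sketch: append each command to a history file kept on a persistent volume
ARG USERNAME=vscode
RUN SNIPPET="export PROMPT_COMMAND='history -a' && export HISTFILE=/commandhistory/.bash_history" \
    && mkdir /commandhistory \
    && touch /commandhistory/.bash_history \
    && chown -R $USERNAME /commandhistory \
    && echo "$SNIPPET" >> "/home/$USERNAME/.bashrc"
```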
Define a mount for the history to be saved in the `devcontainer.json` file. Be sure to replace `project_name` with something more appropriate to match the project. This name should be unique for each project that is implemented.
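For example (a sketch; the volume name before the first comma is the part to make unique per project):

```json
{
  // .devcontainer/devcontainer.json (sketch)
  "mounts": [
    "source=project_name-bashhistory,target=/commandhistory,type=volume"
  ]
}
```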
Volumes vs Mounts
The container allows local system files and directories to be mounted for development use. This can be useful for mounting special directories or data into the container itself. This can be achieved in two different ways, each with its own advantages. The preferred way of mounting a local file is to use the `mounts` property in the `devcontainer.json` file. Three items are needed for this: `source`, `target`, and `type`.
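A minimal sketch of the `mounts` property; both paths are examples only:

```json
{
  // .devcontainer/devcontainer.json (sketch; paths are examples)
  "mounts": [
    "source=/var/shared/project-data,target=/workspace/data,type=bind"
  ]
}
```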
It's also possible to use dynamic paths that rely on local environment variables to define the source path, such as mounting the local home directory into the container. This home path will likely be unique for each developer contributing to the project.
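A sketch using the `${localEnv:...}` substitution; the target path assumes the non-root user from earlier:

```json
{
  // .devcontainer/devcontainer.json (sketch; target path is an assumption)
  "mounts": [
    "source=${localEnv:HOME},target=/home/vscode/local-home,type=bind"
  ]
}
```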
The caveat with mounting local files and directories this way is that these paths have to exist on the local filesystem or the container launch will fail.
Another way to mount local paths into a container is to specify the `-v` flag in the `runArgs` property. This attaches the path to the container as a volume instead of a mount. A benefit of using a volume instead of a mount is that the container launch won't fail if the local path doesn't already exist. This can be incredibly useful for projects where contributors are working on different operating systems, e.g. Linux and Windows.
The following example uses a volume to attach a path to a Linux-based OS, but the container won’t fail to launch on a Windows OS. The volume just won’t be attached or available for the Windows developer.
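A hedged sketch of that setup; the source path is a Linux-style placeholder:

```json
{
  // .devcontainer/devcontainer.json (sketch; the source path is a Linux-style placeholder)
  "runArgs": [
    "-v", "/var/tmp/project-cache:/workspace/cache"
  ]
}
```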
Local ENV Variables
It’s possible to pass local system environment variables into the container. The variables are passed into the development environment without altering the underlying container image. These variables can be set to specific values or reference local variables that are set on the user’s system.
For cfn_nag, the user's present working directory is passed in as a custom container variable so that the `rake` tasks, some of which rely on local file paths, run correctly inside the container.
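A sketch using the `containerEnv` property; the variable name `HOST_PWD` is a placeholder, not necessarily the exact name the project uses:

```json
{
  // .devcontainer/devcontainer.json (sketch; HOST_PWD is a placeholder variable name)
  "containerEnv": {
    "HOST_PWD": "${localWorkspaceFolder}"
  }
}
```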
Docker-In-Docker Development
It's possible to run Docker commands within the VS Code development container. Docker running inside the container environment uses the local host's Docker daemon. Without any additional configuration, the docker-in-docker commands will fail because the host's Docker daemon isn't passed through to the container. Passing it through is achieved by binding the path to the local Docker socket into the container.
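A sketch of the socket bind in `devcontainer.json`:

```json
{
  // .devcontainer/devcontainer.json (sketch)
  "mounts": [
    "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind"
  ]
}
```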
In addition to giving the container a way to connect to the host's Docker daemon, the container itself needs the Docker CLI installed. In the `Dockerfile`, be sure to install these items.
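A sketch assuming a Debian/Ubuntu base image and Docker's official apt repository:

```dockerfile
# Sketch: install the Docker CLI only (the daemon stays on the host)
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        apt-transport-https ca-certificates curl gnupg lsb-release \
    && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
    && echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
        > /etc/apt/sources.list.d/docker.list \
    && apt-get update \
    && apt-get install -y --no-install-recommends docker-ce-cli
```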
cfn_nag utilizes this Docker-in-Docker setup to run custom `rake` tasks inside the container. There are tasks in place that run `rspec`, end-to-end tests, and `rubocop` commands against the codebase from within another Docker container. This ensures that the changes made in the code are run against the custom-defined container image built for the project. These additional docker commands can still be run from within the development container as long as the proper configuration is in place to allow a connection with the local host's Docker daemon.
Sign Git Commits with GPG
A huge benefit of developing in VS Code is the ability to run Git commands from within the IDE. This benefit can be passed down to the container as well so that the capability isn't lost or hindered. It's possible to retain access via SSH keys and securely sign Git commits with GPG. A couple of settings need to be configured to get this working for the project.
First, in the `Dockerfile`, make sure that the `gnupg-agent` is installed. Then make sure that the GPG prompt window is properly configured for the container, along with creating a folder structure with proper permissions for the container user.
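A sketch of those steps, assuming the non-root user from earlier and a terminal-based pinentry:

```dockerfile
# Sketch: install gnupg-agent, configure a terminal pinentry, and prepare key directories
ARG USERNAME=vscode
RUN apt-get update && apt-get install -y --no-install-recommends gnupg-agent pinentry-curses \
    && mkdir -p /home/$USERNAME/.gnupg /home/$USERNAME/.ssh \
    && echo "pinentry-program /usr/bin/pinentry-curses" >> /home/$USERNAME/.gnupg/gpg-agent.conf \
    && chown -R $USERNAME:$USERNAME /home/$USERNAME/.gnupg /home/$USERNAME/.ssh \
    && chmod 700 /home/$USERNAME/.gnupg /home/$USERNAME/.ssh
```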
Second, in the `devcontainer.json` file, ensure that the local file system's SSH key directory and GPG directories and keys are associated with the container. These can be specified as `runArgs` so that the container will still launch even if a contributor isn't using SSH or GPG to interact with Git.
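A hedged example; the home-directory paths and filenames are assumptions and may differ per setup:

```json
{
  // .devcontainer/devcontainer.json (sketch; paths are assumptions)
  "runArgs": [
    "-v", "${localEnv:HOME}/.ssh:/home/vscode/.ssh:ro",
    "-v", "${localEnv:HOME}/.gnupg/pubring.kbx:/home/vscode/.gnupg/pubring.kbx:ro",
    "-v", "${localEnv:HOME}/.gnupg/private-keys-v1.d:/home/vscode/.gnupg/private-keys-v1.d:ro"
  ]
}
```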
It's important to note that each of these attached volumes is defined as read-only by using the `:ro` flag at the end of the line.
Publish a Docker Hub Image
Publishing a project's custom container image to Docker Hub is another great way to speed up joining and working on a project. Building the image with a GitHub Actions workflow removes the burden of running the initial build of the container image on the developer's machine when they start working on the project for the first time. Having this pre-built image greatly reduces the time it takes to start developing, since VS Code can pull the latest image available instead of building or rebuilding on the developer's system. To do this, create a new workflow that runs on any changes made to the `Dockerfile`. The workflow needs to check out the current code, log in to Docker Hub, and build, tag, and push the new container image.
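A sketch of such a workflow; the image name and secret names are assumptions:

```yaml
# .github/workflows/publish-image.yml (sketch; image and secret names are assumptions)
name: Publish Dev Container Image
on:
  push:
    paths:
      - 'Dockerfile'
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
      - name: Build, tag, and push
        run: |
          docker build -t example-org/project-dev:latest .
          docker push example-org/project-dev:latest
```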
Once this is done, the project can be updated to just use the newly created Docker Hub image with the latest tag.
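The `devcontainer.json` can then point at the published image instead of a local Dockerfile build (the image name here is the same assumption as above):

```json
{
  // .devcontainer/devcontainer.json (sketch; image name is an assumption)
  "image": "example-org/project-dev:latest"
}
```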
Fully Customized Dev Environments
This final post in the series has shown several ways to further customize a development container for a project. All of the items listed above, and throughout the rest of the series, are used for the cfn_nag project. Employing a fully customized development container in Visual Studio Code has drastically cut down the time it takes for a developer to contribute to the project. It removes the guesswork around required software and dependency issues, and it installs and configures an environment tailor-made for the project. The project can still be developed in a secure way by limiting the use of the root user while still allowing developers to connect to Git with SSH keys and sign their commits with GPG. Pre-existing development processes, even ones that run in Docker, can still be used as long as the Docker socket and local paths are forwarded to the dev container.
Please use the information and examples laid out in these three posts to create a fully customized dev environment for a new or existing project. These items have become invaluable for several Stelligent projects.