On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog, and we’ll also reach out to the wider DevOps on AWS community to get their thoughts and insights.
The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible, including:
Networks (e.g. VPC)
Compute (EC2, Containers, Serverless, etc.)
Storage (e.g. S3, EBS, etc.)
Database and Data (RDS, DynamoDB, etc.)
Organizational and Team Structures and Practices
Team and Organization Communication and Collaboration
Version control systems and processes
Orchestration of software delivery workflows
Execution of these workflows
Application/service architectures (e.g., microservices)
Automation of build and deployment processes
Automation of testing and other verification approaches, tools and systems
mu is a tool that Stelligent has created to make it simple and cost-efficient for developers to use AWS as the platform for running their microservices. In this first post of the blog series focused on the mu tool, we will be introducing the motivation for the tool and demonstrating the deployment of a microservice with it.
The architectural pattern of decomposing an application into microservices has proven extremely effective at increasing an organization’s ability to deliver software faster. This is due to the fact that microservices are independently deployable components that are decoupled from other components and highly cohesive around a single business capability. Those attributes of a microservice yield smaller team sizes that are able to operate with a high level of autonomy to deliver what the business wants at the pace the market demands.
What’s the catch?
When teams begin their journey with microservices, they usually face cost duplication on two fronts: infrastructure and re-engineering. The first duplication cost is found in the “infrastructure overhead” used to support the microservice deployment. For example, if you are deploying your microservices on AWS EC2 instances, then for each microservice, you need a cluster of EC2 instances to ensure adequate capacity and tolerance to failures. If a single microservice requires 12 t2.small instances to meet capacity requirements and we want to be able to survive an outage in 1 out of 4 availability zones, then we would need to run 16 instances total, 4 per availability zone. This leaves an overhead cost of 4 t2.small instances. Then multiply this cost by the number of microservices for a given application and it is easy to see that the overhead cost of microservices deployed in this manner can add up quickly.
Containers to the rescue!
An approach to addressing this challenge of overhead costs is to use containers for deploying microservices. Each microservice would be deployed as a series of containers to a cluster of hosts that is shared by all microservices. This allows for greater density of microservices on EC2 instances and allows the overhead to be shared by all microservices. Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers. ECS leverages many AWS services to provide a robust container management solution. Additionally, a developer can use tools like CodeBuild and CodePipeline to create continuous delivery pipelines for their microservices.
That sounds complicated…
This approach leads to the second duplication cost of microservices: the cost of “re-engineering”. There is a significant learning curve for developers to learn how to use all these different AWS resources to deploy their microservices in an efficient manner. If each team is using their autonomy to engineer a platform on AWS for their microservices, then a significant level of engineering effort is being duplicated. This duplication not only causes additional engineering costs, but also impedes a team’s ability to deliver the differentiating business capabilities that they were commissioned to do in the first place.
Let mu help!
To address these challenges, mu was created to simplify the declaration and administration of the AWS resources necessary to support microservices. mu is a tool that a developer uses from their workstation to deploy their microservices to AWS quickly and efficiently as containers. It codifies best practices for microservices, containers and continuous delivery pipelines into the AWS resources it creates on your behalf. It does this from a simple CLI application that can be installed on the developer’s workstation in seconds. Similar to how the Serverless Framework improved the developer experience of Lambda and API Gateway, this tool makes it easier for developers to use ECS as a microservices platform.
Additionally, mu does not require any servers, databases or other AWS resources to support itself. All state information is managed via CloudFormation stacks. It will only create resources (via CloudFormation) necessary to run your microservices. This means at any point you can stop using mu and continue to manage the AWS resources that it created via AWS tools such as the CLI or the console.
The mu tool consists of three main components:
Environments – an environment includes a shared network (VPC) and a cluster of hosts (ECS and EC2 instances) necessary to run microservices as containers. Environments include the ability to automatically scale out or scale in based on resource requirements across all the microservices that are deployed to them. Many environments can exist (e.g. development, staging, production).
Services – a microservice that will be deployed to a given environment (or environments) as a set of containers.
Pipeline – a continuous delivery pipeline that will manage the building, testing, and deploying of a microservice in the various environments.
mu uses the same mechanism as the aws-cli to authenticate with AWS services. If you haven’t configured your AWS credentials yet, the easiest way is to install the aws-cli and then follow the aws configure instructions:
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
Set up your microservice
In order for mu to setup a continuous delivery pipeline for your microservice, you’ll need to run mu from within a git repo. For this demo, we’ll be using the stelligent/banana-service repo for our microservice. If you want to follow along and try this on your own, you’ll want to fork the repo and clone your fork.
Let’s begin with cloning the microservice repo:
$ git clone git@github.com:myuser/banana-service.git
$ cd banana-service
Next, we will initialize mu configuration for our microservice:
$ mu init --env
Writing config to '/Users/casey.lee/Dev/mu/banana-service/mu.yml'
Writing buildspec to '/Users/casey.lee/Dev/mu/banana-service/buildspec.yml'
We need to update the mu.yml that was generated with the URL paths that we want to route to this microservice and the CodeBuild image to use:
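Here’s a sketch of what the updated mu.yml might look like (a hedged example: keys such as pathPatterns and the aws/codebuild/java:openjdk-8 image value are assumptions based on mu’s conventions at the time, not the exact file):

environments:
  - name: acceptance
  - name: production
service:
  name: banana-service
  port: 8080
  pathPatterns:
    - /bananas
  pipeline:
    source:
      provider: GitHub
      repo: myuser/banana-service
    build:
      image: aws/codebuild/java:openjdk-8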
Make sure you have a GitHub token with repo and admin:repo_hook scopes to provide to the pipeline so it can integrate with your GitHub repo. Then you can create the pipeline:
$ mu pipeline up
Upserting Bucket for CodePipeline
Upserting Pipeline for service 'banana-service' ...
GitHub token: XXXXXXXXXXXXXXX
Now that the pipeline is created, it will build and deploy for every commit to your git repo. You can monitor the status of the pipeline as it builds and deploys the microservice:
$ mu svc show
Pipeline URL: https://console.aws.amazon.com/codepipeline/home?region=us-west-2#/view/mu-pipeline-banana-service-Pipeline-1B3A94CZR6WH
+------------+----------+------------------------------------------+------------+---------------------+
| STAGE      | ACTION   | REVISION                                 | STATUS     | LAST UPDATE         |
+------------+----------+------------------------------------------+------------+---------------------+
| Source     | Source   | 1f1b09f0bbc3f42170b8d32c68baf683f1e3f801 | Succeeded  | 2017-04-07 15:12:35 |
| Build      | Artifact | -                                        | Succeeded  | 2017-04-07 15:14:49 |
| Build      | Image    | -                                        | Succeeded  | 2017-04-07 15:19:02 |
| Acceptance | Deploy   | -                                        | InProgress | 2017-04-07 15:19:07 |
| Acceptance | Test     | -                                        | -          | -                   |
| Production | Approve  | -                                        | -          | -                   |
| Production | Deploy   | -                                        | -          | -                   |
| Production | Test     | -                                        | -          | -                   |
+------------+----------+------------------------------------------+------------+---------------------+
Deployments:
+-------------+-------+-------+--------+-------------+------------+
| ENVIRONMENT | STACK | IMAGE | STATUS | LAST UPDATE | MU VERSION |
+-------------+-------+-------+--------+-------------+------------+
+-------------+-------+-------+--------+-------------+------------+
You can also monitor the build logs:
$ mu pipeline logs -f
[Container] 2017/04/07 22:25:43 Running command mu -c mu.yml svc deploy acceptance
[Container] 2017/04/07 22:25:43 Upsert repo for service 'banana-service'
[Container] 2017/04/07 22:25:43 No changes for stack 'mu-repo-banana-service'
[Container] 2017/04/07 22:25:43 Deploying service 'banana-service' to 'dev' from '324320755747.dkr.ecr.us-west-2.amazonaws.com/banana-service:1f1b09f'
Once the pipeline has completed deployment of the service, you can view the logs from the service:
$ mu service logs -f acceptance
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' | ____| .__|_| |_|_| |_\__, | / / / /
:: Spring Boot :: (v1.4.0.RELEASE)
2017-04-07 22:30:08.788 INFO 5 --- [ main] com.stelligent.BananaApplication : Starting BananaApplication on 6a4d5544d9de with PID 5 (/app.jar started by root in /)
2017-04-07 22:30:08.824 INFO 5 --- [ main] com.stelligent.BananaApplication : No active profile set, falling back to default profiles: default
2017-04-07 22:30:09.342 INFO 5 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@108c4c35: startup date [Fri Apr 07 22:30:09 UTC 2017]; root of context hierarchy
2017-04-07 22:30:09.768 INFO 5 --- [ main] com.stelligent.BananaApplication : Starting BananaApplication on 7818361f6f45 with PID 5 (/app.jar started by root in /)
Testing the service
Finally, we can get the information about the ELB endpoint in the acceptance environment to test the service:
$ mu env show acceptance
Environment: acceptance
Cluster Stack: mu-cluster-dev (UPDATE_COMPLETE)
VPC Stack: mu-vpc-dev (UPDATE_COMPLETE)
Bastion Host:
Base URL: http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com
Container Instances:
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
| EC2 INSTANCE        | TYPE     | AMI          | AZ         | CONNECTED | STATUS | # TASKS | CPU AVAIL | MEM AVAIL |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
| i-093b788b4f39dd14b | t2.micro | ami-62d35c02 | us-west-2a | true      | ACTIVE | 3       | 604       | 139       |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
Services:
+----------------+---------------------------------------------------------------------+-----------------+---------------------+
| SERVICE        | IMAGE                                                               | STATUS          | LAST UPDATE         |
+----------------+---------------------------------------------------------------------+-----------------+---------------------+
| banana-service | 324320755747.dkr.ecr.us-west-2.amazonaws.com/banana-service:1f1b09f | CREATE_COMPLETE | 2017-04-07 15:25:43 |
+----------------+---------------------------------------------------------------------+-----------------+---------------------+
To cleanup the resources that mu created, run the following commands:
$ mu pipeline term
$ mu env term acceptance
$ mu env term production
As you can see, mu addresses the infrastructure and engineering overhead costs associated with microservices. It makes deployment of microservices via containers simple and cost-efficient. Additionally, it makes deployments repeatable and predictable by utilizing a continuous delivery pipeline to orchestrate the flow of software changes into production.
In the upcoming posts in this blog series, we will look into:
Test Automation – add test automation to the continuous delivery pipeline with mu
Custom Resources – create custom resources like DynamoDB with mu during our microservice deployment
Service Discovery – use mu to enable service discovery via Consul to allow for inter-service communication
Additional Use Cases – deploy applications other than microservices via mu, like a WordPress stack
Until then, head over to stelligent/mu on GitHub and get started. Keep in touch with us in our Gitter room and share your feedback!
Microservices Platform with ECS – Blog post demonstrating the use of ECS for running Microservices. Shows the level of engineering effort required to run microservices on ECS without a tool like mu.
Docker Swarm Mode is the latest entrant in a large field of container orchestration systems. Docker Swarm was originally released as a standalone product that ran master and agent containers on a cluster of servers to orchestrate the deployment of containers. This changed with the release of Docker 1.12 in July of 2016. Docker Swarm Mode is now officially part of docker-engine, and built right into every installation of Docker. Swarm Mode brought many improvements over the standalone Swarm product, including:
Built-in Service Discovery: Docker Swarm originally included drivers to integrate with Consul, etcd or Zookeeper for the purposes of Service Discovery. However, this required the setup of a separate cluster dedicated to service discovery. The Swarm Mode manager nodes now assign a unique DNS name to each service in the cluster, and load balances between the running containers in those services.
Mesh Routing: One of the most unique features of Docker Swarm Mode is Mesh Routing. All of the nodes within a cluster are aware of the location of every container within the cluster via gossip. This means that if a request arrives on a node that is not currently running the service for which that request was intended, the request will be routed to a node that is running a container for that service. This makes it so that nodes don’t have to be purpose built for specific services. Any node can run any service, and every node can be load balanced equally, reducing complexity and the number of resources needed for an application.
Security: Docker Swarm Mode uses TLS encryption for communication between services and nodes by default.
Docker API: Docker Swarm Mode utilizes the same API that every user of Docker is already familiar with. No need to install or learn additional software.
But wait, there’s more! Check out some of the other features at Docker’s Swarm Mode Overview page.
For companies facing increasing complexity in Docker container deployment and management, Docker Swarm Mode provides a convenient, cost-effective, and performant tool to meet those needs.
Creating a Docker Swarm cluster
For the sake of brevity, I won’t reinvent the wheel and go over manual cluster creation here. Instead, I encourage you to follow the fantastic tutorial on Docker’s site.
What I will talk about, however, is the new Docker for AWS tool that Docker recently released. This is an AWS CloudFormation template that can be used to quickly and easily set up all of the necessary resources for a highly available Docker Swarm cluster, and because it is a CloudFormation template, you can edit it to add any additional resources, such as Route 53 hosted zones or S3 buckets, to your application.
One of the very interesting features of this tool is that it dynamically configures the listeners for your Elastic Load Balancer (ELB). Once you deploy a service on Docker Swarm, the built-in management service that is baked into instances launched with Docker for AWS will automatically create a listener for any published ports for your service. When a service is removed, that listener will subsequently be removed.
If you want to create a Docker for AWS stack, read over the list of prerequisites, then click the Launch Stack button below. Keep in mind you may have to pay for any resources you create. If you are deploying Docker for AWS into an older account that still has EC2-Classic, or wish to deploy Docker for AWS into an existing VPC, read the FAQ here for more information.
Deploying a Stack to Docker Swarm
With the release of Docker 1.13 in January of 2017, major enhancements were added to Docker Swarm Mode that greatly improved its ease of use. Docker Swarm Mode now integrates directly with Docker Compose v3 and officially supports the deployment of “stacks” (groups of services) via docker-compose.yml files. With the new properties introduced in Docker Compose v3, it is possible to specify node affinity via tags, rolling update policies, restart policies, and desired scale of containers. The same docker-compose.yml file you would use to test your application locally can now be used to deploy to production. Here is a sample service with some of the new properties:
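Below is a minimal sketch of such a service, modeled on the vote service from Docker’s example voting app (the image name and values are illustrative, chosen to match the update behavior described below):

version: "3"
services:
  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - "5000:80"
    deploy:
      # Run two copies of the container across the swarm
      replicas: 2
      # Update one container at a time, waiting 10s between updates
      update_config:
        parallelism: 1
        delay: 10s
      # Restart containers that exit with a failure
      restart_policy:
        condition: on-failure
      # Only place these containers on worker nodes
      placement:
        constraints: [node.role == worker]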
While most of the properties within this YAML structure will be familiar to anyone used to Docker Compose v2, the deploy property is new to v3. The replicas field indicates the number of containers to run within the service. The update_config field tells the swarm how many containers to update in parallel and how long to wait between updates. The restart_policy field determines when a container should be restarted. Finally, the placement field allows container affinity to be set based on tags or node properties, such as Node Role. When deploying this docker-compose file locally, using docker-compose up, the deploy properties are simply ignored.
SSH into any one of your Manager nodes with the user 'docker' and the EC2 Keypair you specified when you launched the stack.
curl -O https://raw.githubusercontent.com/docker/example-voting-app/master/docker-stack.yml
docker stack deploy -c docker-stack.yml vote
You should now see Docker creating your services, volumes and networks. Now run the following command to view the status of your stack and the services running within it.
docker stack ps vote
You’ll get output similar to this:
This shows the container id, container name, container image, node the container is currently running on, its desired and current state, and any errors that may have occurred. As you can see, the vote_visualizer.1 container failed at run time, so it was shut down and a new container was spun up to replace it.
This sample application opens up three ports on your Elastic Load Balancer (ELB): 5000 for the voting interface, 5001 for the real-time vote results interface, and 8080 for the Docker Swarm visualizer. You can find the DNS Name of your ELB by either going to the EC2 Load Balancers page of the AWS console, or viewing your CloudFormation stack’s Outputs tab in the CloudFormation page of the AWS Console. Here is an example of the CloudFormation Outputs tab:
DefaultDNSTarget is the URL you can use to access your application.
If you access the Visualizer on port 8080, you will see an interface similar to this:
This is a handy tool to see which containers are running, and on which nodes.
Scaling services is as simple as running the command docker service scale SERVICENAME=REPLICAS, for example:
docker service scale vote_vote=3
will scale the vote service to 3 containers, up from 2. Because Docker Swarm uses an overlay network, it is able to run multiple containers of the same service on the same node, allowing you to scale your services as high as your CPU and Memory allocations will allow.
If you make any changes to your docker-compose file, updating your stack is incredibly easy. Simply run the same command you used to create your stack:
docker stack deploy -c docker-stack.yml vote
Docker Swarm will update any services that were changed from the previous version, and adhere to any update_configs specified in the docker-compose file. In the case of the vote service specified above, only one container will be updated at a time, and a 10 second delay will occur once the first container is successfully updated before the second container is updated.
This was just a brief overview of the capabilities of Docker Swarm Mode in Docker 1.13. For further reading, feel free to explore the Docker Swarm Mode and Docker Compose docs. In another post, I’ll be going over some of the advantages and disadvantages of Docker Swarm Mode compared to other container orchestration systems, such as ECS and Kubernetes.
If you have any experiences with Docker Swarm Mode that you would like to share, or have any questions on any of the materials presented here, please leave a comment below!
In this first post of a series exploring containerized CI solutions, I’m going to be addressing the CI tool with the largest market share in the space: Jenkins. Whether you’re already running Jenkins in a more traditional virtualized or bare metal environment, or if you’re using another CI tool entirely, I hope to show you how and why you might want to run your CI environment using Jenkins in Docker, particularly on Amazon EC2 Container Service (ECS). If I’ve done my job right and all goes well, you should have run a successful Jenkins build on ECS well within a half hour from now!
Jenkins is an open source CI tool written in Java. One of its strengths is the very large collection of plugins available, including one for ECS. The Amazon EC2 Container Service Plugin can launch containers on your ECS cluster that automatically register themselves as Jenkins slaves, execute the appropriate Jenkins job on the container, and then automatically remove the container/build slave afterwards.
But before diving into the demo, why would you want to run your CI builds in containers? First, containers are portable, which, especially when also utilizing Docker for your development environment, will give you a great deal of confidence that if your application builds in a Dockerized CI environment, it will build successfully locally and vice-versa. Next, even if you’re not using Docker for your development environment, a containerized CI environment will give you the benefit of an immutable build infrastructure where you can be sure that you’re building your application in a new ephemeral environment each time. And last but certainly not least, provisioning containers is very fast compared to virtual machines, which is something that you will notice immediately if you’re used to spinning up VMs/cloud instances for build slaves like with the Amazon EC2 Plugin.
As for running the Jenkins master on ECS, one benefit is fast recovery if the Jenkins EC2 instance goes down. When using EFS for Jenkins state storage and a multi-AZ ECS cluster like in this demo, the Jenkins master will recover very quickly in the event of an EC2 container instance failure or AZ outage.
Okay, let’s get down to business…
Let’s begin: first launch the provided CloudFormation stack by clicking the button below:
You’ll have to enter these parameters:
AvailabilityZone1: an AZ that your AWS account has access to
AvailabilityZone2: another accessible AZ in the same region as AvailabilityZone1
InstanceType: EC2 instance type for ECS container instances (must be at least t2.small for this demo)
KeyPair: a key pair that will allow you to SSH into the ECS container instances, if necessary
PublicAccessCIDR: a CIDR block that will have access to view the public Jenkins proxy and SSH into container instances (ex: 18.104.22.168/32)
NOTE: Jenkins will not automatically be secured by a user and password, so this parameter can be used to secure your Jenkins master by limiting network access to the provided CIDR block. If you’d like to limit access to Jenkins to only your public IP address, enter “[YOUR_PUBLIC_IP_ADDRESS]/32” here, or if you’d like to allow access to the world (and then possibly secure Jenkins yourself afterwards) enter “0.0.0.0/0“.
Okay, the stack is launching—so what’s going on here?
In a nutshell, this CloudFormation stack provisions a VPC containing a multi-AZ ECS cluster, and a Jenkins ECS service that uses Amazon Elastic File System (Amazon EFS) storage to persist Jenkins data. For ease of use, this CloudFormation stack also contains a basic NGINX reverse proxy that allows you to view Jenkins via a public endpoint. Jenkins and NGINX each consist of an ECS service, an ECS task definition, and a classic ELB (internal for Jenkins, and Internet-facing for the proxy).
In actuality, I think that a lot of organizations would choose to keep Jenkins internal in a private subnet and rely on a VPN for outside access to Jenkins. Instead, to keep things relatively simple, this stack only creates public subnets and relies on security groups for network access control.
There are a couple of reasons why running a Jenkins master on ECS is a bit complicated. One is an ECS limitation: you can associate only one load balancer with an ECS service, while Jenkins runs as a single Java application that listens for web traffic on one port and for JNLP connections from build slaves on another (defaults are 8080 and 50000, respectively). When launching a workload in ECS, using an Elastic Load Balancer for service discovery as I’m doing in this example, and provisioning using CloudFormation, you need to use a Classic Load Balancer that is listening on both Jenkins ports (listening on multiple ports is not currently possible with the recently released Application Load Balancer).
Another complication is that Jenkins stores its state in XML on disk, as opposed to some other CI tools that allow you to use an external database to store state (examples coming later in this blog series). This is why I chose to use EFS in this stack—when requiring persistent data in an ECS container, you must be able to sync Docker volumes between your ECS container instances because a container for your service can run on any container instance in the cluster. EFS provides a valuable solution to this issue by allowing you to mount an NFS file system that is shared amongst all the container instances in your cluster.
Depending on how long you took to digest that fancy diagram and my explanation, feel free to grab a cup of coffee; the stack took about 7-8 minutes to complete successfully during my testing. When you see that beautiful CREATE_COMPLETE in the stack status, continue on.
One of the CloudFormation stack outputs is PublicJenkinsURL; navigate to that URL in your browser and you should see the Jenkins home page (at least within a minute, once the instance is in service):
To make things easier, let’s click ENABLE AUTO REFRESH (in the upper-right) right off the bat.
Then click Manage Jenkins > Manage Plugins, navigate to the Available tab, and select these two plugins (you can filter the plugins by each name in the Filter text box):
Amazon EC2 Container Service Plugin
Git plugin
NOTE: there are a number of “Git” plugins, but you’ll want the one that’s just named “Git plugin”
And click Download now and install after restart.
Select the Restart Jenkins when installation is complete and no jobs are running checkbox at the bottom, and Jenkins will restart after the plugins are downloaded.
When Jenkins comes back after restarting, go back to the Jenkins home screen, and navigate to Manage Jenkins > Configure System.
Scroll down to the Cloud section, click Add a new cloud > Amazon EC2 Container Service Cloud, and enter the following configuration (substituting the CloudFormation stack output where indicated):
Amazon ECS Credential: – none – (because we’re using the IAM role of the container instance instead)
Amazon ECS Region Name: us-east-1 (or the region you launched your stack in)
Then, in the configuration for a new Jenkins job, under Build, click Add build step > Execute shell, and set:
Command: mvn package
That’s it for the Jenkins configuration. Now click Build Now on the left side of the screen.
Under Build History, you’re going to see a “pending – waiting for next available executor” message, which will switch to a progress bar when the ECS container starts. When the progress bar appears (it might take a couple of minutes for the first build while ECS downloads the Docker build slave image, but after this it should only take a few seconds when the image is cached on your ECS container instance), click it and you’ll see the console output for the build:
Okay, Maven is downloading a bunch of dependencies…and more dependencies…and more dependencies…and finally building…and see that “Finished: SUCCESS?” Congratulations, you just ran a build in an ECS Jenkins build slave container!
One thing that you may have noticed is that we used a Docker image provided by CloudBees (the enterprise backers of Jenkins). For your own projects, you might need to build and use a custom build slave Docker image. You’ll probably want to set up a pipeline for each of these Docker builds (and possibly publish to Amazon ECR), and configure an ECS slave template that uses this custom image. One caveat: Jenkins slaves need to have Java installed, which, depending on your build dependencies, may increase the size of your Docker image somewhat significantly (well, relatively so for a Docker image). For reference, check out the Dockerfile of a bare-bones Jenkins build slave provided by the Jenkins project on Docker Hub.
Next Next Steps
Pretty cool, right? Well, while it’s the most popular, Jenkins isn’t the only player in the game—stay tuned for a further exploration and comparison of containerized CI solutions on AWS in this blog series!
Interested in Docker, Jenkins, and/or working someplace where your artful use of monkey GIFs will finally be truly appreciated? Stelligent is hiring!
In my first post on automating the EC2 Container Service (ECS), I described how I automated the provisioning of ECS in AWS CloudFormation using its JSON-based DSL.
In this second and last part of the series, I will demonstrate how to create a deployment pipeline in AWS CodePipeline to deploy changes to ECS Docker images in the EC2 Container Registry (ECR).
In doing this, you’ll not only see how to automate the creation of the infrastructure but also automate the deployment of the application and its infrastructure via Docker containers. This way you can commit infrastructure, application and deployment changes as code to your version-control repository and have these changes automatically deployed to production or production-like environments.
The benefit is the customer responsiveness this embodies: you can deploy new features or fixes to users in minutes, not days or weeks.
In the figure below, you can see the high-level architecture for the deployment pipeline.
With the exception of the CodeCommit repository creation, most of the architecture is implemented in a CloudFormation template. Some of this is the result of not requiring a traditional configuration management tool to perform configuration on compute instances.
CodePipeline is a Continuous Delivery service that enables you to orchestrate every step of your software delivery process in a workflow that consists of a series of stages and actions. These actions perform the steps of your software delivery process.
In CodePipeline, I’ve defined two stages: Source and Build. The Source stage retrieves code artifacts via a CodeCommit repository whenever someone commits a new change. This initiates the pipeline. CodePipeline is integrated with the Jenkins Continuous Integration server. The Build stage updates the ECS Docker image (which runs a small PHP web application) within ECR and makes the new application available through an ELB endpoint.
Jenkins is installed and configured on an Amazon EC2 instance within an Amazon Virtual Private Cloud (VPC). The CloudFormation template runs commands to install and configure the Jenkins server, install and configure Docker, install and configure the CodePipeline plugin and configure the job that’s run as part of the CodePipeline build action. The Jenkins job is configured to run a bash script that’s committed to the CodeCommit repository. This bash script updates the ECS service and task definition by running a Docker build, tag and push to the ECR repository. I describe the implementation of this architecture in more detail in this post.
In this example, CodePipeline manages the orchestration of the software delivery workflow. Since CodePipeline doesn’t actually execute the actions, you need to integrate it with an execution platform. To perform the execution of the actions, I’m using the Jenkins Continuous Integration server. I’ll configure a CodePipeline plugin for Jenkins so that Jenkins executes certain CodePipeline actions.
In particular, I have an action to update an ECS service. I do this by running a CloudFormation update on the stack. CloudFormation looks for any differences in the templates and applies those changes to the existing stack.
To orchestrate and execute this CloudFormation update, I configure a CodePipeline custom action that calls a Jenkins job. In this Jenkins job, I call a shell script passing several arguments.
Provision Jenkins in CloudFormation
In the CloudFormation template, I create an EC2 instance on which I will install and configure the Jenkins server. This CloudFormation script is based on the CodePipeline starter kit.
To launch a Jenkins server in CloudFormation, you will use the AWS::EC2::Instance resource. Before doing this, you’ll create an IAM role and an EC2 security group in the already provisioned VPC (the VPC provisioning is part of the CloudFormation script).
Within the Metadata attribute of the resource (i.e. the EC2 instance on which Jenkins will run), you use AWS::CloudFormation::Init to define the configuration to apply to the instance. Then, from the instance’s user data, you call cfn-init to run those commands on the EC2 instance, like this:
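A minimal sketch (not the exact template from the starter kit) of a JSON-based EC2 instance resource whose user data calls cfn-init might look like this:

"JenkinsServer" : {
  "Type" : "AWS::EC2::Instance",
  "Metadata" : {
    "AWS::CloudFormation::Init" : {
      "config" : {
        "commands" : {
          "01_install_jenkins" : { "command" : "yum install -y jenkins" }
        }
      }
    }
  },
  "Properties" : {
    "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
      "#!/bin/bash -xe\n",
      "/opt/aws/bin/cfn-init -v",
      " --stack ", { "Ref" : "AWS::StackName" },
      " --resource JenkinsServer",
      " --region ", { "Ref" : "AWS::Region" }, "\n"
    ]]}}
  }
}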
In the config-template.xml, I added tokens that get replaced as part of the commands run from the CloudFormation template. You can see a snippet of this below in which the command for the Jenkins job makes a call to the configure-ecs.sh bash script with some tokenized parameters.
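As a rough illustration, the builder section of such a config-template.xml might resemble the following (a hedged sketch; the token names here are invented placeholders, not the actual tokens from the post):

<project>
  <builders>
    <hudson.tasks.Shell>
      <!-- Tokens are replaced with real values by sed commands in the CloudFormation template -->
      <command>bash ./configure-ecs.sh TOKEN_STACK_NAME TOKEN_ACCOUNT_ID TOKEN_ECS_REPO_NAME</command>
    </hudson.tasks.Shell>
  </builders>
</project>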
All of the commands for installing and configuring the Jenkins Server, Docker, the CodePipeline plugin and Jenkins jobs are described in the CloudFormation template that is hosted in the version-control repository.
Jenkins Job Configuration Template
In the previous code snippets from CloudFormation, you see that I’m using sed to update a file called config-template.xml. This is a Jenkins job configuration file for which I’m updating some token variables with dynamic information that gets passed to it from CloudFormation. This information is used to run a bash script to update the CloudFormation stack – which is described in the next section.
ECS Service Script to Update CloudFormation Stack
The code snippet below shows how the bash script captures the arguments that are passed by the Jenkins job into bash variables. Later in the script, it uses these variables to call the update-stack command of the CloudFormation API and apply a new ECS Docker image to the endpoint.
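A hedged sketch of that argument capture (the variable names and ordering are assumptions, not the actual script):

#!/bin/bash -e
# Capture the arguments passed by the Jenkins job as named bash variables
ECS_STACK_NAME=$1
ACCOUNT_ID=$2
ECS_REPO_NAME=$3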
In the code snippet below of the configure-ecs.sh script, I’m building, tagging and pushing to the Docker repository in my EC2 Container Registry repository using the dynamic values passed to this script from Jenkins (which were initially passed from the parameters and resources of my CloudFormation script).
In doing this, it creates a new Docker image for each commit and tags it with a unique id based on date and time. Finally, it uses the AWS CLI to call the update-stack command of the CloudFormation API using the variable information.
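Putting that together, here is a hedged sketch of the build, tag, push, and stack update (the ImageTag parameter and ecs_template_url variable are named in this post; everything else is an assumption):

# Tag each image uniquely based on date and time so every commit yields a new image
IMAGE_TAG=$(date +%Y%m%d%H%M%S)
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/${ECS_REPO_NAME}"

# Build, tag and push the Docker image to the ECR repository
docker build -t "${ECS_REPO_NAME}:${IMAGE_TAG}" .
docker tag "${ECS_REPO_NAME}:${IMAGE_TAG}" "${IMAGE_URI}:${IMAGE_TAG}"
docker push "${IMAGE_URI}:${IMAGE_TAG}"

# Apply the new image tag to the running ECS environment via a CloudFormation stack update
aws cloudformation update-stack \
  --stack-name "${ECS_STACK_NAME}" \
  --template-url "${ecs_template_url}" \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=ImageTag,ParameterValue="${IMAGE_TAG}"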
Now that you’ve seen the basics of installing and configuring Jenkins in CloudFormation and what happens when the Jenkins job is run through the CodePipeline orchestration, let’s look at the steps for configuring the CodePipeline side of the CodePipeline/Jenkins integration.
Create a Pipeline using AWS CodePipeline
Before I create a working pipeline, I prefer to model the stages and actions in CodePipeline using Lambda so that I can think through the workflow. To do this I refer to my blog post on Mocking AWS CodePipeline pipelines with Lambda. I’m going to create a two-stage pipeline consisting of a Source and a Build stage. These stages and the actions in these stages are described in more detail below.
Define a Custom Action
There are five types of action categories in CodePipeline: Source, Build, Deploy, Invoke and Test. Each action has four attributes: category, owner, provider and version. There are three types of action owners: AWS, ThirdParty and Custom. AWS refers to built-in actions provided by AWS. Currently, there are four built-in action providers from AWS: S3, CodeCommit, CodeDeploy and ElasticBeanstalk. Examples of ThirdParty action providers include RunScope and GitHub. If none of the action providers suit your needs, you can define custom actions in CodePipeline. In my case, I wanted to run a script from a Jenkins job so I used the CloudFormation sample configuration from the CodePipeline starter kit for the configuration of the custom build action that I use to integrate Jenkins with CodePipeline. See the snippet below.
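A hedged sketch of such a custom action resource follows (the provider name and configuration property are illustrative, adapted from the starter kit’s pattern):

"CustomJenkinsAction" : {
  "Type" : "AWS::CodePipeline::CustomActionType",
  "Properties" : {
    "Category" : "Build",
    "Provider" : "EcsJenkinsBuild",
    "Version" : "1",
    "ConfigurationProperties" : [{
      "Key" : true,
      "Name" : "ProjectName",
      "Queryable" : true,
      "Required" : true,
      "Secret" : false,
      "Type" : "String"
    }],
    "InputArtifactDetails" : { "MinimumCount" : 0, "MaximumCount" : 5 },
    "OutputArtifactDetails" : { "MinimumCount" : 0, "MaximumCount" : 5 },
    "Settings" : {
      "EntityUrlTemplate" : { "Fn::Join" : ["", [
        "http://", { "Fn::GetAtt" : ["JenkinsServer", "PublicDnsName"] },
        "/job/{Config:ProjectName}/"
      ]]}
    }
  }
}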
The example pipeline that I’ve defined in CodePipeline (and described as code in CloudFormation) uses the above custom action in the Build stage of the pipeline, which is described in more detail in the Build Stage section later.
The Source stage has a single action that looks for any changes to a CodeCommit repository. If it discovers any new commits, it retrieves the artifacts from CodeCommit and stores them in encrypted form in an S3 bucket. If it’s successful, it transitions to the next stage: Build. A snippet from the CodePipeline resource definition for the Source stage in CloudFormation is shown below.
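A hedged sketch of that Source stage (artifact and branch names are assumptions):

{
  "Name" : "Source",
  "Actions" : [{
    "Name" : "Source",
    "ActionTypeId" : {
      "Category" : "Source",
      "Owner" : "AWS",
      "Provider" : "CodeCommit",
      "Version" : "1"
    },
    "Configuration" : {
      "RepositoryName" : { "Ref" : "RepositoryName" },
      "BranchName" : "master"
    },
    "OutputArtifacts" : [{ "Name" : "SourceOutput" }],
    "RunOrder" : 1
  }]
}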
The Build stage invokes actions to create a new ECS repository if one doesn’t exist, builds and tags a Docker image and makes a call to a CloudFormation template to launch the rest of the ECS environment – including creating an ECS cluster, task definition, ECS services, ELB, Security Groups and IAM resources. It does this using the custom CodePipeline action for Jenkins that I described earlier. A snippet from the CodePipeline resource definition in CloudFormation for the Build stage is shown below.
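And a hedged sketch of the Build stage using the custom Jenkins action defined earlier (the provider and project names match the assumptions above):

{
  "Name" : "Build",
  "Actions" : [{
    "Name" : "UpdateEcsService",
    "ActionTypeId" : {
      "Category" : "Build",
      "Owner" : "Custom",
      "Provider" : "EcsJenkinsBuild",
      "Version" : "1"
    },
    "Configuration" : { "ProjectName" : "update-ecs-service" },
    "InputArtifacts" : [{ "Name" : "SourceOutput" }],
    "RunOrder" : 1
  }]
}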
The custom action for Jenkins (via the CodePipeline plugin) is looking for work from CodePipeline. When it finds work, it performs the task associated with the CodePipeline action. In this case, it runs the Jenkins job that calls the configure-ecs.sh script. This bash script makes a update-stack call to the original CloudFormation template passing in the new image via the ImageTag parameter which is the new tag generated for the Docker image created as part of this script.
CloudFormation seeks to apply the minimum necessary changes to the infrastructure based on the stack update. In this case, I’m only providing a new image tag, but this results in the creation of a new ECS task definition for the service. In your CloudFormation events console, you’ll see a message similar to the one below:
AWS::ECS::TaskDefinition Requested update requires the creation of a new physical resource; hence creating one.
As I mentioned in part 1 of this series, I defined a DeploymentConfiguration type with a MinimumHealthyPercent property of 0 since I’m only using one EC2 instance while running through the earlier stages of the pipeline. This means the application experiences a few seconds of downtime during the update. Like most applications/services these days, if I needed continual uptime, I’d increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.
In the example I provided, I stop at the Build stage. If you were to take this to production, you might include other stages as well. Perhaps you might have a “Staging” stage in which you might include actions to deploy the application to the ECS containers using a production-like configuration which might include more instances in the Auto Scaling Group.
Once Staging is complete, the pipeline would automatically transition to the Production stage where it might make Lambda calls to test the application running in ECS containers. If everything looks ok, it switches the Route 53 hosted zone endpoint to the new container.
Launch the ECS Stack and Pipeline
In this section, you’ll launch the CloudFormation stack that creates the ECS and Pipeline resources.
You need to have already created an ECR repository and a CodeCommit repository to successfully launch this stack. For instructions on creating an ECR repository, see part 1 of this series (or directly launch the CloudFormation stack that creates the ECR repository using the provided Launch Stack button). For creating a CodeCommit repository, you can either see part 1 or use the instructions described at: Create and Connect to an AWS CodeCommit Repository.
Launch the Stack
Click the button below to launch a CloudFormation stack that provisions the ECS environment including all the resources previously described such as CodePipeline, ECS Cluster, ECS Task Definition, ECS Service, ELB, VPC resources, IAM Roles, etc.
You’ll enter values for the following parameters: RepositoryName, YourIP, KeyName, and ECSRepoName.
To launch the same stack from your AWS CLI, type the following (while modifying the same parameter values described above):
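A hedged example (the stack name and template URL are placeholders; the parameter keys come from the list above):

aws cloudformation create-stack \
  --stack-name ecs-pipeline \
  --template-url https://s3.amazonaws.com/YOUR_BUCKET/ecs-pipeline.json \
  --region us-east-1 \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=RepositoryName,ParameterValue=ecs-demo \
               ParameterKey=YourIP,ParameterValue=YOUR_IP/32 \
               ParameterKey=KeyName,ParameterValue=YOUR_KEY_PAIR \
               ParameterKey=ECSRepoName,ParameterValue=YOUR_ECR_REPO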
Once the CloudFormation stack successfully launches, there are several outputs but the two most relevant are AppURL and CodePipelineURL. You can click on the AppURL value to launch the PHP application running on ECS from the ELB endpoint. The CodePipelineURL output value launches the generated pipeline from the CodePipeline console. See the screenshot below.
Access the Application
Once the stack successfully completes, go to the Outputs tab for the CloudFormation stack and click on the AppURL value to launch the application.
Commit Changes to CodeCommit
Make some visual changes to the code and commit these changes to your CodeCommit repository to see these changes get deployed through your pipeline. You perform these actions from the directory where you cloned a local version of your CodeCommit repo (in the directory created by your git clone command). Some example command-line operations are shown below.
git commit -am "change color to pink"
git push
Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline. After the pipeline is successfully completed, follow the same instructions for launching the application from your browser.
While the solution can work “straight out of the box”, if you’d like to make some changes, I’ve included a few sections of the code that you’ll need to modify.
The purpose of the configure-ecs.sh bash script is to run the Docker commands to build, tag and push the image, and to update the existing CloudFormation stack to update the ECS service and task. The source for this bash script is here: https://github.com/stelligent/cloudformation_templates/blob/master/labs/ecs/configure-ecs.sh. I hard-coded the ecs_template_url variable to a specific S3 location. You can download the source file from either GitHub or S3, make your desired modifications, and then point the ecs_template_url variable at the new location (presumably in S3).
The config-template.xml file is the Jenkins job configuration for the update-ECS action. This XML file contains tokens that get replaced from the ecs-pipeline.json CloudFormation template with dynamic information like the CloudFormation stack name, account id, etc. The file is obtained via a wget command from within the template and is stored in S3 at https://s3.amazonaws.com/stelligent-training-public/public/jenkins/config-template.xml, so you can copy it to an S3 location in your own account and update the CloudFormation template to point to the new location. In doing this, you can modify any of the behavior of the file when used by Jenkins.
In this series, you learned how to use CloudFormation to fully automate the provisioning of the Elastic Container Service along with a CodePipeline pipeline that uses CodeCommit as its version-control repository so that whenever a change is made to the Git repo, the changes are automatically applied to a PHP application hosted on ECS images.
By modeling your pipeline in CodePipeline, you can apply even more stages and actions as part of your Continuous Delivery process so that it runs through all the tests and other checks, enabling you to deliver changes to production whenever there’s a business need to do so.
Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.
The sample solution currently only works in the us-east-1 AWS region. You will be charged for your AWS usage – including EC2, S3, CodePipeline and other services.
In this two-part series, you’ll learn how to provision, configure, and orchestrate EC2 Container Service (ECS) applications into a deployment pipeline that’s capable of deploying new infrastructure and code changes when developers commit changes to a version-control repository, so that team members can release new changes to users whenever they choose to do so: Continuous Delivery.
While the primary AWS service described in this solution is ECS, I’ll also be covering the various components and services that support this solution including AWS CloudFormation, EC2 Container Registry (ECR), Docker, Identity and Access Management (IAM), VPC and Auto Scaling Services – to name a few. In part 2, I’ll be covering the integration of CodePipeline, Jenkins and CodeCommit in greater detail.
ECS allows you to run Docker containers on AWS. The benefits of ECS and Docker include the following:
Portability – You can build on one Linux operating system and have it work on others without modification. It’s also portable across environment types so you can build it in development and use the same image in production.
Scalability – You can run multiple images on the same EC2 instance to scale thousands of tasks across a cluster.
Speed – Increase your speed of development and speed of runtime execution.
“ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.” 
The reason you might use Docker-based containers over traditional virtual machine-based application deployments is that it allows a faster, more flexible, and still very robust immutable deployment pattern in comparison with services such as traditional Elastic Beanstalk, OpsWorks, or native EC2 instances.
While you can very effectively integrate Docker into Elastic Beanstalk, ECS provides greater overall flexibility.
The reason you might use ECS or Elastic Beanstalk containers with EC2 Container Registry over similar offerings such as Docker Hub or Docker Trusted Registry is higher performance, better availability, and lower pricing. In addition, ECR utilizes other AWS services such as IAM and S3, allowing you to compose more secure or robust patterns to meet your needs.
Based on the current implementation of Lambda, the reasons you might choose to utilize ECS instead of serverless architectures include:
Lower latency in request response time
Flexibility in the underlying language stack to use
Elimination of AWS Lambda service limits (requests per second, code size, total code runtime)
Greater control of the application runtime environment
The ability to link modules in ways not possible with Lambda functions
I’ll be using a sample PHP application provided by AWS to demonstrate a Continuous Delivery pipeline using ECS, CloudFormation and, in part 2, AWS CodePipeline.
Create and Connect to a CodeCommit Repository
While you can store your application code in any version-control repository, in this example I’ll be using the AWS CodeCommit Git repository, which I’ll be integrating with CodePipeline. I’m basing the code on the Amazon ECS PHP Simple Demo App located at https://github.com/awslabs/ecs-demo-php-simple-app.
To create your own CodeCommit repo, follow these instructions: Create and Connect to an AWS CodeCommit Repository. Take note of the repository name as you’ll be using it as a CloudFormation user parameter in part 2. I called my CodeCommit repository ecs-demo. You can call it the same but if you do name it something different, be sure to replace the samples with your repo name.
After you create your CodeCommit repo, copy the contents from the AWS PHP ECS Demo app and commit all of the files.
CodeCommit provides the following features and benefits:
Highly available, secure, and private Git repositories
Use your existing Git tools
Automatically encrypts all files in transit and at rest
Provides Webhooks – to trigger Lambda functions or push notifications in response to events
Integrated with other AWS services like IAM so you can define user-specific permissions
Create a Private Image Repository in ECS using ECR
You can create private Docker repositories in the EC2 Container Registry (ECR) to store your Docker images. Follow these instructions to manually create an ECR repository: Create a Repository.
A snippet of the CloudFormation template for provisioning an ECR repo is listed below.
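A hedged sketch of that resource (the policy below grants push/pull to the IAM user passed in as a parameter; the logical and parameter names are illustrative):

"ECSRepository" : {
  "Type" : "AWS::ECR::Repository",
  "Properties" : {
    "RepositoryPolicyText" : {
      "Version" : "2008-10-17",
      "Statement" : [{
        "Sid" : "AllowPushPull",
        "Effect" : "Allow",
        "Principal" : {
          "AWS" : { "Fn::Join" : ["", ["arn:aws:iam::", { "Ref" : "AWS::AccountId" }, ":user/", { "Ref" : "IAMUsername" }]] }
        },
        "Action" : [
          "ecr:GetDownloadUrlForLayer",
          "ecr:BatchGetImage",
          "ecr:BatchCheckLayerAvailability",
          "ecr:PutImage",
          "ecr:InitiateLayerUpload",
          "ecr:UploadLayerPart",
          "ecr:CompleteLayerUpload"
        ]
      }]
    }
  }
}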
In defining an ECR, you can securely store your Docker images and refer to them when building, tagging and pushing these Docker images.
To launch the CloudFormation stack that creates an ECR repository, click the provided Launch Stack button. Your IAM username is a parameter to this CloudFormation template. You only need to enter the IAM username (and not the entire ARN) as the input value. Make note of the ECSRepository Output from the stack as you’ll be using this as an input to the ECS Environment Stack in part 2.
“Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.”  In this demonstration, you’ll build, tag and push a PHP application as a Docker image into an ECR repository.
Build Docker Image and Upload to ECR Locally
You’re running these commands from an Amazon Linux EC2 instance. If you’re not, you’ll need to adapt the instructions according to your OS flavor.
You’ve created an ECR repo (see the “Create a Private Image Repository in ECS using ECR” section above)
You’ve created a CodeCommit repository and committed the PHP code from the AWS PHP app in GitHub (see the “Create and Connect to a CodeCommit Repository” section above)
Install Docker on an Amazon Linux EC2 instance for which your AWS CLI has been configured (you can find detailed instructions at Install Docker)
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
Log out and log back in, then type:
sudo yum -y install git*
Clone the ECS PHP example application (if you used a different repo name, be sure to update the sample command here):
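For example, assuming the repo is named ecs-demo and lives in us-east-1 (and that your Git credentials for CodeCommit are configured):

git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/ecs-demo
cd ecs-demo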
Configure your AWS account by running the command below and following the prompts to enter your credentials, region and output format.
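aws configure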
Run the command below to login to ECR.
eval $(aws --region us-east-1 ecr get-login)
Build the image using Docker. Replace REPOSITORY_NAME with the ECSRepository Output from the ECR stack you launched and TAG with a unique value. Make note of the image tag you use when creating the Docker image, as you’ll be using it as an input parameter to a CloudFormation stack later. If you want to use the default value, just name it latest.
docker build -t REPOSITORY_NAME:TAG .
Tag the image (replace REPOSITORY_NAME, TAG and AWS_ACCOUNT_ID):
docker tag REPOSITORY_NAME:TAG AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:TAG
Push the tagged image to ECR (replace REPOSITORY_NAME, AWS_ACCOUNT_ID and TAG):
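docker push AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:TAG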
Verify the image was uploaded to your ECS Repository by going to your AWS ECS Console, clicking on Repositories and selecting the repository you created when you launched the ECS Stack.
“A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.” The snippet you see below is the Dockerfile to run the PHP sample application. You can see that it runs OS updates, installs the required packages including apache and PHP, and then configures the HTTP server and port. While these are the types of steps you might run in any automated build and deployment script, the difference is that they run within a container, which means they run very quickly, you can run the same steps across operating systems, and you can run these procedures across multiple tasks in a cluster.
# Base image (an assumption: the original demo targets Ubuntu, since it uses apt-get and php5)
FROM ubuntu:14.04

# Install dependencies
RUN apt-get update -y
RUN apt-get install -y git curl apache2 php5 libapache2-mod-php5 php5-mcrypt php5-mysql
# Install app
RUN rm -rf /var/www/*
ADD src /var/www
# Configure apache
RUN a2enmod rewrite
RUN chown -R www-data:www-data /var/www
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
# Expose the HTTP port (assumption: the app is served by apache on port 80)
EXPOSE 80
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
This Dockerfile gets run when you run the docker build command. This file has been committed to my CodeCommit repo as you can see in the figure below.
Create an ECS Environment in CloudFormation
In this section, I describe how to configure the entire ECS stack in CloudFormation, including the architecture, its dependencies, and the key CloudFormation resources that make up the stack.
The overall solution architecture is illustrated in the CloudFormation diagram below.
Auto Scaling Group – I’m using an auto scaling group to scale the underlying EC2 infrastructure in the ECS Cluster. It’s used in conjunction with the Launch Configuration.
Auto Scaling Launch Configuration – I’m using a launch configuration to scale the underlying EC2 infrastructure in the ECS Cluster. It’s used in conjunction with the Auto Scaling Group.
CodeCommit – I’m using CodeCommit as my Git repo to store the application and infrastructure code.
CodePipeline – CodePipeline describes my Continuous Delivery workflow. In particular, it integrates with CodeCommit and Jenkins to run actions every time someone commits new code to the CodeCommit repo. This will be covered in more detail in part 2.
ECS Cluster – “An ECS cluster is a logical grouping of container instances that you can place tasks on.”
ECS Service – With an ECS service, you can run a specific number of instances of a task definition simultaneously in an ECS cluster.
ECS Task Definition – A task definition is the core resource within ECS. This is where you define which Docker images to run, CPU/memory, ports, commands and so on. Everything else in ECS is based upon the task definition.
Elastic Load Balancer – The ELB provides the endpoint for the application. The ELB dynamically determines which EC2 instance in the cluster is serving the running ECS tasks at any given time.
IAM Instance Profile – “An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.” In the sample, I’m using the instance profile to define the role that the launch configuration uses for the underlying EC2 instances that the ECS cluster runs on.
IAM Roles – I’m describing roles that have access to certain AWS resources for the EC2 instances (for ECS), Jenkins and CodePipeline
Jenkins – I’m using Jenkins to execute the actions that I’ve defined in CodePipeline. For example, I have a bash script that updates the CloudFormation stack when an ECS Service is updated. This action is orchestrated via CodePipeline and then executed on the Jenkins server by one of its configured jobs. This will be covered in more detail in part 2.
Virtual Private Cloud (VPC) – In the CloudFormation template, I’m using a VPC template that we developed to define VPC resources such as: VPCGatewayAttachment, SecurityGroup, SecurityGroupIngress, SecurityGroupEgress, SubnetNetworkAclAssociation, NetworkAclEntry, NetworkAcl, SubnetRouteTableAssociation, Route, RouteTable, InternetGateway, and Subnet
There are four core dependencies in this solution: an EC2 key pair, a CodeCommit repo, a VPC, and an ECR repo with a Docker image.
EC2 Key Pair – An existing EC2 key pair is required so you can SSH into the EC2 instances that the stack launches
CodeCommit – In this demo, I’m using an AWS CodeCommit Git repo to store the PHP application code along with my Docker configuration. See the instructions for configuring a Git repo in CodeCommit above
VPC – This template requires an existing AWS Virtual Private Cloud has been created
ECR repo and image – You should have created an EC2 Container Registry (ECR) repository using the CloudFormation template from the previous section. You should have also built, tagged and pushed a Docker image to ECR using the instructions described in Create a Private Image Repository in ECS using ECR above
With an ECS Cluster, you can manage multiple services. An ECS Container Instance runs an ECS agent that is registered to the ECS Cluster. To define an ECS Cluster in CloudFormation, use the Cluster resource: AWS::ECS::Cluster as shown below.
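A minimal sketch (the logical resource name is illustrative; AWS::ECS::Cluster takes no required properties):

"EcsCluster" : {
  "Type" : "AWS::ECS::Cluster"
}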
Notice that I defined a DeploymentConfiguration with a MinimumHealthyPercent of 0. Since I’m only using one EC2 instance in development, a CloudFormation update of the ECS service would otherwise fail; by setting MinimumHealthyPercent to zero, the update can proceed, and the application experiences only a few seconds of downtime during the update. Like most applications/services these days, if I needed continual uptime, I’d increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.
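For reference, here’s a hedged sketch of an ECS service resource carrying that DeploymentConfiguration (the logical names and container name are illustrative):

"EcsService" : {
  "Type" : "AWS::ECS::Service",
  "Properties" : {
    "Cluster" : { "Ref" : "EcsCluster" },
    "DesiredCount" : "1",
    "DeploymentConfiguration" : {
      "MaximumPercent" : 100,
      "MinimumHealthyPercent" : 0
    },
    "LoadBalancers" : [{
      "ContainerName" : "php-simple-app",
      "ContainerPort" : "80",
      "LoadBalancerName" : { "Ref" : "EcsElasticLoadBalancer" }
    }],
    "Role" : { "Ref" : "EcsServiceRole" },
    "TaskDefinition" : { "Ref" : "PhpTaskDefinition" }
  }
}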
With an ECS Task Definition, you can define multiple Container Definitions and volumes. With a Container Definition, you define port mappings, environment variables, CPU Units and Memory. An ECS Volume is a persistent volume to mount and map to container volumes.
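A hedged sketch of a task definition with a single container definition (the names, CPU units, and memory values are illustrative; the image refers to the ECR repo and tag from earlier):

"PhpTaskDefinition" : {
  "Type" : "AWS::ECS::TaskDefinition",
  "Properties" : {
    "ContainerDefinitions" : [{
      "Name" : "php-simple-app",
      "Image" : { "Fn::Join" : ["", [
        { "Ref" : "AWS::AccountId" }, ".dkr.ecr.us-east-1.amazonaws.com/",
        { "Ref" : "ECSRepoName" }, ":", { "Ref" : "ImageTag" }
      ]]},
      "Cpu" : 10,
      "Memory" : 300,
      "Essential" : true,
      "PortMappings" : [{ "ContainerPort" : 80, "HostPort" : 80 }]
    }]
  }
}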
In this first part of the series, you learned how to use CloudFormation to fully automate the provisioning of the EC2 Container Service and Docker which includes ELB, Auto Scaling, and VPC resources. You also learned how to setup a CodeCommit repository.
In the next and last part of this series, you’ll learn how to orchestrate all of the changes into a deployment pipeline to achieve Continuous Delivery using CodePipeline and Jenkins so that any change made to the CodeCommit repo can be deployed to production in an automated fashion. I’ll provide access to all the code resources in part 2 of this series. Let us know if you have any comments or questions @stelligent or @paulduvall.
My colleague Jeff Bachtel provided the thoughts on reasons why some teams might choose to use Docker and ECS over serverless. I also used several resources from AWS including the PHP sample app, the Introduction to AWS CodeCommit video, the CodePipeline Starter Kit and the ECS CloudFormation snippets.