Service discovery for microservices with mu

mu is a tool that makes it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this fourth post of the blog series focused on the mu tool, we will use mu to set up Consul for service discovery between multiple microservices.

Why do I need service discovery?

One of the biggest benefits of a microservices architecture is that the services can be deployed independently of one another.  However, this presents a new challenge in that it becomes difficult for clients to know the list of containers to use when invoking the service.  Here are three different approaches to address this challenge:

  • Load balancer per microservice: Create a load balancer for every microservice and add/remove containers to the load balancer as deployments and scaling events occur.  The endpoint address of the load balancer is then shared with clients through some manual process.

[Diagram: a load balancer per microservice]

There are three concerns with this approach.  First, the endpoint address of the load balancer must never change, or else all the clients will break and require updates to use the new endpoint address.  This can be addressed via DNS CNAME records, but still requires that the name chosen for the record never change.  Second, there is the additional cost of a load balancer for every microservice.  Finally, additional latency is introduced by placing a load balancer in the path of every microservice invocation.

  • Shared load balancer: Create a load balancer that is shared by all microservices in an environment.  The load balancer must have rules for each microservice to route requests by URI patterns.

[Diagram: a shared load balancer for all microservices]

The concern with this approach is that all traffic is now flowing through a single load balancer which can become a constraint in scaling the entire system.  Additionally, the load balancer becomes a shared resource amongst all the microservice teams, potentially impacting a team’s ability to operate independently of other teams.

  • Client load balancer: Load balancing from within the client is an approach in which the client has an awareness of all the containers in-service for a given microservice.  The client can then load balance between the containers when invoking the microservice.  This approach requires a system to provide service registration and service discovery.   

[Diagram: client-side load balancing with service discovery]

The benefit of this approach is that there are no longer load balancers between each microservice request, so the concerns with the prior approaches are addressed.  However, a new type of microservice, an edge service, will need to be deployed to allow clients outside the microservice environment (that do not have access to service discovery) to invoke the service.

The preferred approach is the third one: service discovery and client-side load balancing within the microservice environment, and edge routing with traditional load balancing for clients outside the microservice environment.  This approach provides the lowest latency and most loosely coupled solution for microservice invocation.

Let mu help!

The environment that mu creates for your microservice can manage the provisioning of Consul for service discovery and registration of your microservices.  Consul is a sort of phonebook for microservices.  It provides APIs for services to register their endpoints and for clients to look up those endpoints.
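
You normally won't need to call Consul directly, since mu handles registration for you, but as a minimal sketch of what the API looks like (assuming a Consul agent is reachable at localhost:8500):

# Register a service instance with the local Consul agent
curl -X PUT -d '{"Name": "banana-service", "Port": 8080}' http://localhost:8500/v1/agent/service/register

# Look up all registered instances of that service
curl http://localhost:8500/v1/catalog/service/banana-service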

Let’s demonstrate this by adding a milkshake service that invokes the banana service from the first post.  Additionally, we will create a zuul-router service to provide an edge service via Netflix’s Zuul.  Zuul is a proxy service that serves as the front door for all requests from outside the microservice environment.  Zuul will use Consul for service discovery to determine where best to route the incoming request.  Additionally, Zuul provides an excellent location to enforce policies such as authentication, authorization or logging on all incoming requests.

Enabling Consul and Edge Router

The first thing we will want to do is set up our edge router with Zuul.  This is just a matter of adding the @EnableZuulProxy and @EnableDiscoveryClient annotations to the Spring Boot application:

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
public class ZuulRouterApplication {

   public static void main(String[] args) {
     SpringApplication.run(ZuulRouterApplication.class, args);
   }
}

Zuul is configured via the application.yml file in src/main/resources.  For each service that we want exposed via the edge router, we add URI path patterns:

spring:
  application:
    name: zuul-router
zuul:
  routes:
    milkshake-service:
      path: /milkshakes/**
      stripPrefix: false
    banana-service:
      path: /bananas/**
      stripPrefix: false

In order to enable Consul in your environment, you need to update the environment definition in the mu.yml file.  Additionally, you need to configure Spring Cloud Consul to connect to the Docker host IP address for service discovery.  We will also want to configure Spring Cloud to not register with Consul, since mu will already configure the Registrator agent on your ECS container instances:

environments:
- name: acceptance
  cluster:
    maxSize: 5
  discovery:
    provider: consul
- name: production

service:
  name: zuul-router
  port: 8080
  pathPatterns:
  - /*
  environment:
    SPRING_CLOUD_CONSUL_HOST: 172.17.0.1
    SPRING_CLOUD_CONSUL_DISCOVERY_REGISTER: 'false'
  pipeline:
    source:
      provider: GitHub
      repo: cplee/zuul-router
    build:
      image: aws/codebuild/java:openjdk-8

Create Milkshake Service

Now we can create a new service to manage the creation of milkshakes.  The service looks very similar to the banana service, with the exception of declaring a Spring RestTemplate annotated with @LoadBalanced to enable client-side load balancing via Ribbon.

 

@SpringBootApplication
@EnableDiscoveryClient
public class MilkshakeApplication {

  @LoadBalanced
  @Bean
  RestTemplate restTemplate(){
     return new RestTemplate();
  }
}

Now we can use the RestTemplate to make calls directly to the banana service.  Ribbon will look up a service named banana-service in Consul and replace the service name in the URL with the IP and port of one of its containers:

@Component
public class BananaProvider implements FlavorProvider {

  @Autowired
  private RestTemplate restTemplate;

  private List<Map<String,Object>> getAll() {
    ParameterizedTypeReference<List<Map<String, Object>>> typeRef =
            new ParameterizedTypeReference<List<Map<String, Object>>>() {};

    ResponseEntity<List<Map<String, Object>>> exchange =
            this.restTemplate.exchange("http://banana-service/bananas",HttpMethod.GET,null, typeRef);

    return exchange.getBody();
  }
}

Try it out!

After we have deployed all three services, we can use mu to confirm that all are running as expected.

~ ❯❯❯ mu env show acceptance                                                                                                                                                                                                       

Environment:    acceptance
Cluster Stack:  mu-cluster-dev (UPDATE_COMPLETE)
VPC Stack:      mu-vpc-dev (UPDATE_COMPLETE)
Bastion Host:   35.164.117.25
Base URL:       http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com

Container Instances:
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
|    EC2 INSTANCE     |   TYPE   |     AMI      |     AZ     | CONNECTED | STATUS | # TASKS | CPU AVAIL | MEM AVAIL |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
| i-08e3edc8c644f0534 | t2.micro | ami-62d35c02 | us-west-2b | true      | ACTIVE |       3 |       604 |       139 |
| i-05bc14a67e53889e1 | t2.micro | ami-62d35c02 | us-west-2a | true      | ACTIVE |       3 |       604 |       139 |
| i-0b56a0d9572531e9e | t2.micro | ami-62d35c02 | us-west-2c | true      | ACTIVE |       3 |       604 |       139 |
| i-05b2188a5c575fbeb | t2.micro | ami-62d35c02 | us-west-2b | true      | ACTIVE |       1 |       624 |       739 |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+

Services:
+-------------------+---------------------------+------------------+---------------------+
|      SERVICE      |         IMAGE             |      STATUS      |     LAST UPDATE     |
+-------------------+---------------------------+------------------+---------------------+
| milkshake-service | milkshake-service:9e4bcd9 | CREATE_COMPLETE  | 2017-05-12 11:33:05 |
| zuul-router       | zuul-router:3d4795c       | UPDATE_COMPLETE  | 2017-05-12 12:09:47 | 
| banana-service    | banana-service:3b62124    | UPDATE_COMPLETE  | 2017-05-12 11:32:55 |
+-------------------+---------------------------+------------------+---------------------+

We can then use curl to get a list of all the bananas available via the banana-service:

curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq
[
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/9"
      }
    ]
  }
]

Next we try to create a milkshake using the milkshake-service:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/milkshakes\?flavor\=Banana | jq                                                                         
{
  "timestamp": "2017-05-15T19:12:56.640+0000",
  "status": 500,
  "error": "Internal Server Error",
  "exception": "org.springframework.web.client.HttpClientErrorException",
  "message": "429 Not enough bananas to make the shake.",
  "path": "/milkshakes"
}

Looks like there aren’t enough bananas to create a milkshake.  Let’s create another banana:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas

~ ❯❯❯ curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq                                                                                                                         
[
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/9"
      }
    ]
  },
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/10"
      }
    ]
  }
]

Now let’s try creating a milkshake again:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/milkshakes\?flavor\=Banana | jq
{
  "id": 3,
  "flavor": "Banana"
}

This time it worked, and if we query the list of bananas again, we see that both have been used for the milkshake:

~ ❯❯❯ curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq                                                                                                                        
[]

Conclusion

Decomposing a monolithic application into microservices presents an interesting challenge in enabling services to invoke one another while still keeping the services loosely coupled.  Using a client-side load balancer like Ribbon along with a service discovery tool like Consul provides an excellent solution to this challenge.  As demonstrated in this post, mu makes it simple to enable service discovery in your microservice environment to help achieve this solution.  Head over to stelligent/mu on GitHub and get started!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Introducing mu: a tool for managing your microservices in AWS

mu is a tool that Stelligent has created to make it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this first post of the blog series focused on the mu tool, we will be introducing the motivation for the tool and demonstrating the deployment of a microservice with it.  

Why microservices?

The architectural pattern of decomposing an application into microservices has proven extremely effective at increasing an organization’s ability to deliver software faster.  This is due to the fact that microservices are independently deployable components that are decoupled from other components and highly cohesive around a single business capability.  Those attributes of a microservice yield smaller team sizes that are able to operate with a high level of autonomy to deliver what the business wants at the pace the market demands.

What’s the catch?

When teams begin their journey with microservices, they usually face cost duplication on two fronts:  infrastructure and re-engineering. The first duplication cost is found in the “infrastructure overhead” used to support the microservice deployment.  For example, if you are deploying your microservices on AWS EC2 instances, then for each microservice, you need a cluster of EC2 instances to ensure adequate capacity and tolerance to failures.  If a single microservice requires 12 t2.small instances to meet capacity requirements and we want to be able to survive an outage in 1 out of 4 availability zones, then we would need to run 16 instances total, 4 per availability zone.  This leaves an overhead cost of 4 t2.small instances.  Then multiply this cost by the number of microservices for a given application and it is easy to see that the overhead cost of microservices deployed in this manner can add up quickly.

Containers to the rescue!

An approach to addressing this challenge of overhead costs is to use containers for deploying microservices.  Each microservice would be deployed as a series of containers to a cluster of hosts that is shared by all microservices.  This allows for greater density of microservices on EC2 instances and allows the overhead to be shared by all microservices.  Amazon ECS (EC2 Container Service) provides an excellent platform for deploying microservices as containers.  ECS leverages many AWS services to provide a robust container management solution.  Additionally, a developer can use tools like CodeBuild and CodePipeline to create continuous delivery pipelines for their microservices.

That sounds complicated…

This approach leads to the second duplication cost of microservices: the cost of “re-engineering”.  There is a significant learning curve for developers to learn how to use all these different AWS resources to deploy their microservices in an efficient manner.  If each team uses their autonomy to engineer a platform on AWS for their microservices, then a significant level of engineering effort is duplicated.  This duplication not only causes additional engineering costs, but also impedes a team’s ability to deliver the differentiating business capabilities that they were commissioned to do in the first place.

Let mu help!

To address these challenges, mu was created to simplify the declaration and administration of the AWS resources necessary to support microservices.  mu is a tool that a developer uses from their workstation to deploy their microservices to AWS quickly and efficiently as containers.  It codifies best practices for microservices, containers and continuous delivery pipelines into the AWS resources it creates on your behalf.  It does this from a simple CLI application that can be installed on the developer’s workstation in seconds.  Similar to how the Serverless Framework improved the developer experience of Lambda and API Gateway, this tool makes it easier for developers to use ECS as a microservices platform.

Additionally, mu does not require any servers, databases or other AWS resources to support itself.  All state information is managed via CloudFormation stacks.  It will only create resources (via CloudFormation) necessary to run your microservices.  This means at any point you can stop using mu and continue to manage the AWS resources that it created via AWS tools such as the CLI or the console.
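
For example, because everything mu creates is a CloudFormation stack, you can see all of it with a single CLI call.  This is only a sketch, assuming the default mu- stack name prefix that you will see later in this post:

aws cloudformation describe-stacks \
  --query "Stacks[?starts_with(StackName, 'mu-')].{Name:StackName, Status:StackStatus}" \
  --output table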

Core components

The mu tool consists of three main components:

  • Environments – an environment includes a shared network (VPC) and cluster of hosts (ECS and EC2 instances) necessary to run microservices as containers.  The environments include the ability to automatically scale out or scale in based on resource requirements across all the microservices that are deployed to them.  Many environments can exist (e.g. development, staging, production).
  • Services – a microservice that will be deployed to a given environment (or environments) as a set of containers.
  • Pipeline – a continuous delivery pipeline that will manage the building, testing, and deploying of a microservice in the various environments.

[Diagram: mu architecture showing environments, services, and pipelines]

Installing and configuring mu

First let’s install mu:

$ curl -s http://getmu.io/install.sh | sh

If you’re appalled at the idea of curl | bash installers, then you can always just download the latest version directly.

mu will use the same mechanism as aws-cli to authenticate with the AWS services.  If you haven’t configured your AWS credentials yet, the easiest way to configure them is to install the aws-cli and then follow the aws configure instructions:

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

Set up your microservice

In order for mu to setup a continuous delivery pipeline for your microservice, you’ll need to run mu from within a git repo.  For this demo, we’ll be using the stelligent/banana-service repo for our microservice.  If you want to follow along and try this on your own, you’ll want to fork the repo and clone your fork.

Let’s begin with cloning the microservice repo:

$ git clone git@github.com:myuser/banana-service.git
$ cd banana-service

Next, we will initialize mu configuration for our microservice:

$ mu init --env
Writing config to '/Users/casey.lee/Dev/mu/banana-service/mu.yml'
Writing buildspec to '/Users/casey.lee/Dev/mu/banana-service/buildspec.yml'

We need to update the mu.yml that was generated with the URL paths that we want to route to this microservice and the CodeBuild image to use:

environments:
- name: acceptance
- name: production
service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  pipeline:
    source:
      provider: GitHub
      repo: myuser/banana-service
    build:
      image: aws/codebuild/java:openjdk-8

Next, we need to update the generated buildspec.yml to include the gradle build command:

version: 0.1
phases:
  build:
    commands:
      - gradle build
artifacts:
  files:
    - '**/*'

Finally, commit and push our changes:

$ git add --all && git commit -m "mu init" && git push

Create the pipeline

Make sure you have a GitHub token with repo and admin:repo_hook scopes to provide to the pipeline in order to integrate with your GitHub repo.  Then you can create the pipeline:

$ mu pipeline up
Upserting Bucket for CodePipeline
Upserting Pipeline for service 'banana-service' ...
  GitHub token: XXXXXXXXXXXXXXX

Now that the pipeline is created, it will build and deploy for every commit to your git repo.  You can monitor the status of the pipeline as it builds and deploys the microservice:

$ mu svc show

Pipeline URL:   https://console.aws.amazon.com/codepipeline/home?region=us-west-2#/view/mu-pipeline-banana-service-Pipeline-1B3A94CZR6WH
+------------+----------+------------------------------------------+-------------+---------------------+
|   STAGE    |  ACTION  |                 REVISION                 |   STATUS    |     LAST UPDATE     |
+------------+----------+------------------------------------------+-------------+---------------------+
| Source     | Source   | 1f1b09f0bbc3f42170b8d32c68baf683f1e3f801 | Succeeded   | 2017-04-07 15:12:35 |
| Build      | Artifact |                                        - | Succeeded   | 2017-04-07 15:14:49 |
| Build      | Image    |                                        - | Succeeded   | 2017-04-07 15:19:02 |
| Acceptance | Deploy   |                                        - | InProgress  | 2017-04-07 15:19:07 |
| Acceptance | Test     |                                        - | -           |                   - |
| Production | Approve  |                                        - | -           |                   - |
| Production | Deploy   |                                        - | -           |                   - |
| Production | Test     |                                        - | -           |                   - |
+------------+----------+------------------------------------------+-------------+---------------------+

Deployments:
+-------------+-------+-------+--------+-------------+------------+
| ENVIRONMENT | STACK | IMAGE | STATUS | LAST UPDATE | MU VERSION |
+-------------+-------+-------+--------+-------------+------------+
+-------------+-------+-------+--------+-------------+------------+

You can also monitor the build logs:

$ mu pipeline logs -f
[Container] 2017/04/07 22:25:43 Running command mu -c mu.yml svc deploy acceptance 
[Container] 2017/04/07 22:25:43 Upsert repo for service 'banana-service' 
[Container] 2017/04/07 22:25:43   No changes for stack 'mu-repo-banana-service' 
[Container] 2017/04/07 22:25:43 Deploying service 'banana-service' to 'dev' from '324320755747.dkr.ecr.us-west-2.amazonaws.com/banana-service:1f1b09f' 

Once the pipeline has completed deployment of the service, you can view the logs from the service:

$ mu service logs -f acceptance                                                                                                                                                                         
  .   ____          _          __ _ _
 /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| | ) ) ) )
  ' | ____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/

:: Spring Boot ::        (v1.4.0.RELEASE) 
2017-04-07 22:30:08.788  INFO 5 --- [           main] com.stelligent.BananaApplication         : Starting BananaApplication on 6a4d5544d9de with PID 5 (/app.jar started by root in /) 
2017-04-07 22:30:08.824  INFO 5 --- [           main] com.stelligent.BananaApplication         : No active profile set, falling back to default profiles: default 
2017-04-07 22:30:09.342  INFO 5 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@108c4c35: startup date [Fri Apr 07 22:30:09 UTC 2017]; root of context hierarchy 
2017-04-07 22:30:09.768  INFO 5 --- [           main] com.stelligent.BananaApplication         : Starting BananaApplication on 7818361f6f45 with PID 5 (/app.jar started by root in /) 

Testing the service

Finally, we can get the information about the ELB endpoint in the acceptance environment to test the service:

$ mu env show acceptance                                                                                                                                                                        

Environment:    acceptance
Cluster Stack:  mu-cluster-dev (UPDATE_COMPLETE)
VPC Stack:      mu-vpc-dev (UPDATE_COMPLETE)
Bastion Host:
Base URL:       http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com
Container Instances:
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
|    EC2 INSTANCE     |   TYPE   |     AMI      |     AZ     | CONNECTED | STATUS | # TASKS | CPU AVAIL | MEM AVAIL |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
| i-093b788b4f39dd14b | t2.micro | ami-62d35c02 | us-west-2a | true      | ACTIVE |       3 |       604 |       139 |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+

Services:
+----------------+---------------------------------------------------------------------+------------------+---------------------+
|    SERVICE     |                                IMAGE                                |      STATUS      |     LAST UPDATE     |
+----------------+---------------------------------------------------------------------+------------------+---------------------+
| banana-service | 324320755747.dkr.ecr.us-west-2.amazonaws.com/banana-service:1f1b09f | CREATE_COMPLETE  | 2017-04-07 15:25:43 |
+----------------+---------------------------------------------------------------------+------------------+---------------------+


$ curl -s http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas | jq

[
  {
    "pickedAt": "2017-04-10T10:34:27.911",
    "peeled": false,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-EcsEl-1K74542METR82-1781937931.us-west-2.elb.amazonaws.com/bananas/1"
      }
    ]
  }
]

Cleanup

To cleanup the resources that mu created, run the following commands:

$ mu pipeline term
$ mu env term acceptance
$ mu env term production

Conclusion

As you can see, mu addresses infrastructure and engineering overhead costs associated with microservices.  It makes deployment of microservices via containers simple and cost-efficient.  Additionally, it ensures the deployments are repeatable and non-dramatic by utilizing a continuous delivery pipeline for orchestrating the flow of software changes into production.

In the upcoming posts in this blog series, we will look into:

  • Test Automation –  add test automation to the continuous delivery pipeline with mu
  • Custom Resources –  create custom resources like DynamoDB with mu during our microservice deployment
  • Service Discovery – use mu to enable service discovery via Consul to allow for inter-service communication
  • Additional Use Cases – deploy applications other than microservices via mu, like a wordpress stack

Until then, head over to stelligent/mu on GitHub and get started.  Keep in touch with us in our Gitter room and share your feedback!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Docker Swarm Mode on AWS

Docker Swarm Mode is the latest entrant in a large field of container orchestration systems. Docker Swarm was originally released as a standalone product that ran master and agent containers on a cluster of servers to orchestrate the deployment of containers. This changed with the release of Docker 1.12 in July of 2016. Docker Swarm Mode is now officially part of docker-engine, and built right into every installation of Docker. Swarm Mode brought many improvements over the standalone Swarm product, including:

  • Built-in Service Discovery: Docker Swarm originally included drivers to integrate with Consul, etcd or Zookeeper for the purposes of Service Discovery. However, this required the setup of a separate cluster dedicated to service discovery. The Swarm Mode manager nodes now assign a unique DNS name to each service in the cluster, and load balances between the running containers in those services.
  • Mesh Routing: One of the most distinctive features of Docker Swarm Mode is Mesh Routing. All of the nodes within a cluster are aware of the location of every container within the cluster via gossip. This means that if a request arrives on a node that is not currently running the service for which that request was intended, the request will be routed to a node that is running a container for that service. This makes it so that nodes don’t have to be purpose-built for specific services. Any node can run any service, and every node can be load balanced equally, reducing complexity and the number of resources needed for an application.
  • Security: Docker Swarm Mode uses TLS encryption for communication between services and nodes by default.
  • Docker API: Docker Swarm Mode utilizes the same API that every user of Docker is already familiar with. No need to install or learn additional software.
  • But wait, there’s more! Check out some of the other features at Docker’s Swarm Mode Overview page.

For companies facing increasing complexity in Docker container deployment and management, Docker Swarm Mode provides a convenient, cost-effective, and performant tool to meet those needs.

Creating a Docker Swarm cluster

[Diagram: Docker Swarm cluster architecture on AWS]

For the sake of brevity, I won’t reinvent the wheel and go over manual cluster creation here. Instead, I encourage you to follow the fantastic tutorial on Docker’s site.

What I will talk about however is the new Docker for AWS tool that Docker recently released. This is an AWS CloudFormation template that can be used to quickly and easily set up all of the necessary resources for a highly available Docker Swarm cluster, and because it is a CloudFormation template, you can edit it to add any additional resources, such as Route53 hosted zones or S3 buckets, to your application.

One of the very interesting features of this tool is that it dynamically configures the listeners for your Elastic Load Balancer (ELB). Once you deploy a service on Docker Swarm, the built-in management service that is baked into instances launched with Docker for AWS will automatically create a listener for any published ports for your service. When a service is removed, that listener will subsequently be removed.

If you want to create a Docker for AWS stack, read over the list of prerequisites, then click the Launch Stack button below. Keep in mind you may have to pay for any resources you create. If you are deploying Docker for AWS into an older account that still has EC2-Classic, or wish to deploy Docker for AWS into an existing VPC, read the FAQ here for more information.

[Launch Stack button]

Deploying a Stack to Docker Swarm

With the release of Docker 1.13 in January of 2017, major enhancements were added to Docker Swarm Mode that greatly improved its ease of use. Docker Swarm Mode now integrates directly with Docker Compose v3 and officially supports the deployment of “stacks” (groups of services) via docker-compose.yml files. With the new properties introduced in Docker Compose v3, it is possible to specify node affinity via tags, rolling update policies, restart policies, and desired scale of containers. The same docker-compose.yml file you would use to test your application locally can now be used to deploy to production. Here is a sample service with some of the new properties:

version: "3"
services:

  vote:
    image: dockersamples/examplevotingapp_vote:before
    ports:
      - 5000:80
    networks:
      - frontend
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == worker]
  
networks:
  frontend:

While most of the properties within this YAML structure will be familiar to anyone used to Docker Compose v2, the deploy property is new to v3. The replicas field indicates the number of containers to run within the service. The update_config field tells the swarm how many containers to update in parallel and how long to wait between updates. The restart_policy field determines when a container should be restarted. Finally, the placement field allows container affinity to be set based on tags or node properties, such as Node Role. When deploying this docker-compose file locally, using docker-compose up, the deploy properties are simply ignored.

Deployment of a stack is incredibly simple. Follow these steps to download Docker’s example voting app stack file and run it on your cluster.

SSH into any one of your Manager nodes with the user 'docker' and the EC2 Keypair you specified when you launched the stack.

curl -O https://raw.githubusercontent.com/docker/example-voting-app/master/docker-stack.yml

docker stack deploy -c docker-stack.yml vote

You should now see Docker creating your services, volumes and networks. Now run the following command to view the status of your stack and the services running within it.

docker stack ps vote

You’ll get output similar to this:

[Screenshot: output of docker stack ps vote]

This shows the container id, container name, container image, node the container is currently running on, its desired and current state, and any errors that may have occurred. As you can see, the vote_visualizer.1 container failed at run time, so it was shut down and a new container spun up to replace it.

This sample application opens up three ports on your Elastic Load Balancer (ELB): 5000 for the voting interface, 5001 for the real-time vote results interface, and 8080 for the Docker Swarm visualizer. You can find the DNS Name of your ELB by either going to the EC2 Load Balancers page of the AWS console, or viewing your CloudFormation stack’s Outputs tab in the CloudFormation page of the AWS Console. Here is an example of the CloudFormation Outputs tab:
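
If you prefer the CLI, the same value can be pulled from the stack outputs. This is just a sketch; substitute the name you gave your Docker for AWS stack:

aws cloudformation describe-stacks --stack-name <your-docker-for-aws-stack> \
  --query "Stacks[0].Outputs[?OutputKey=='DefaultDNSTarget'].OutputValue" --output text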

[Screenshot: CloudFormation Outputs tab]

DefaultDNSTarget is the URL you can use to access your application.

If you access the Visualizer on port 8080, you will see an interface similar to this:

[Screenshot: Docker Swarm visualizer]

This is a handy tool to see which containers are running, and on which nodes.

Scaling Services

Scaling services is as simple as running the command docker service scale SERVICENAME=REPLICAS, for example:

docker service scale vote_vote=3

will scale the vote service to 3 containers, up from 2. Because Docker Swarm uses an overlay network, it is able to run multiple containers of the same service on the same node, allowing you to scale your services as high as your CPU and Memory allocations will allow.
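
You can confirm the new replica count by listing the services in the swarm:

docker service ls

The REPLICAS column should now show 3/3 for the vote service.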

Updating Stacks

If you make any changes to your docker-compose file, updating your stack is incredibly easy. Simply run the same command you used to create your stack:

docker stack deploy -c docker-stack.yml vote

Docker Swarm will update any services that were changed from the previous version, and adhere to any update_configs specified in the docker-compose file. In the case of the vote service specified above, only one container will be updated at a time, and a 10 second delay will occur once the first container is successfully updated before the second container is updated.

Next Steps

This was just a brief overview of the capabilities of Docker Swarm Mode in Docker 1.13. For further reading, feel free to explore the Docker Swarm Mode and Docker Compose docs. In another post, I’ll be going over some of the advantages and disadvantages of Docker Swarm Mode compared to other container orchestration systems, such as ECS and Kubernetes.

If you have any experiences with Docker Swarm Mode that you would like to share, or have any questions on any of the materials presented here, please leave a comment below!

References

Docker Swarm Mode

Docker for AWS

Docker Example Voting App Github

Containerized CI Solutions in AWS – Part 1: Jenkins in ECS

In this first post of a series exploring containerized CI solutions, I’m going to be addressing the CI tool with the largest market share in the space: Jenkins. Whether you’re already running Jenkins in a more traditional virtualized or bare metal environment, or if you’re using another CI tool entirely, I hope to show you how and why you might want to run your CI environment using Jenkins in Docker, particularly on Amazon EC2 Container Service (ECS). If I’ve done my job right and all goes well, you should have run a successful Jenkins build on ECS well within a half hour from now!

For more background information on ECS and provisioning ECS resources using CloudFormation, please feel free to check out Stelligent’s two-part blog series on the topic.

An insanely quick background on Jenkins

Jenkins is an open source CI tool written in Java. One of its strengths is the very large collection of plugins available, including one for ECS. The Amazon EC2 Container Service Plugin can launch containers on your ECS cluster that automatically register themselves as Jenkins slaves, execute the appropriate Jenkins job on the container, and then automatically remove the container/build slave afterwards.

But, first…why?

But before diving into the demo, why would you want to run your CI builds in containers? First, containers are portable, which, especially when also utilizing Docker for your development environment, will give you a great deal of confidence that if your application builds in a Dockerized CI environment, it will build successfully locally and vice-versa. Next, even if you’re not using Docker for your development environment, a containerized CI environment will give you the benefit of an immutable build infrastructure where you can be sure that you’re building your application in a new ephemeral environment each time. And last but certainly not least, provisioning containers is very fast compared to virtual machines, which is something that you will notice immediately if you’re used to spinning up VMs/cloud instances for build slaves like with the Amazon EC2 Plugin.

As for running the Jenkins master on ECS, one benefit is fast recovery if the Jenkins EC2 instance goes down. When using EFS for Jenkins state storage and a multi-AZ ECS cluster like in this demo, the Jenkins master will recover very quickly in the event of an EC2 container instance failure or AZ outage.

Okay, let’s get down to business…

Let’s begin: first launch the provided CloudFormation stack by clicking the button below:

[Launch Stack button]

You’ll have to enter these parameters:

  • AvailabilityZone1: an AZ that your AWS account has access to
  • AvailabilityZone2: another accessible AZ in the same region as AvailabilityZone1
  • InstanceType: EC2 instance type for ECS container instances (must be at least t2.small for this demo)
  • KeyPair: a key pair that will allow you to SSH into the ECS container instances, if necessary
  • PublicAccessCIDR: a CIDR block that will have access to view the public Jenkins proxy and SSH into container instances (ex: 8.8.8.8/32)
    • NOTE: Jenkins will not automatically be secured by a user and password, so this parameter can be used to secure your Jenkins master by limiting network access to the provided CIDR block.  If you’d like to limit access to Jenkins to only your public IP address, enter “[YOUR_PUBLIC_IP_ADDRESS]/32” here, or if you’d like to allow access to the world (and then possibly secure Jenkins yourself afterwards) enter “0.0.0.0/0”.
[Image: It’s almost this easy, but just click Launch Stack once]

Okay, the stack is launching—so what’s going on here?

[Diagram: resources provisioned by the CloudFormation stack]

In a nutshell, this CloudFormation stack provisions a VPC containing a multi-AZ ECS cluster, and a Jenkins ECS service that uses Amazon Elastic File System (Amazon EFS) storage to persist Jenkins data. For ease of use, this CloudFormation stack also contains a basic NGINX reverse proxy that allows you to view Jenkins via a public endpoint. Both Jenkins and NGINX each consist of an ECS service, ECS task definition, and classic ELB (internal for Jenkins, and Internet-facing for the proxy).

In actuality, I think that a lot of organizations would choose to keep Jenkins internal in a private subnet and rely on a VPN for outside access to Jenkins. Instead, to keep things relatively simple, this stack only creates public subnets and relies on security groups for network access control.

Complications

There are a couple of reasons why running a Jenkins master on ECS is a bit complicated. One is an ECS limitation that allows you to associate only one load balancer with an ECS service, while Jenkins runs as a single Java application that listens for web traffic on one port and for JNLP connections from build slaves on another port (defaults are 8080 and 50000, respectively). When launching a workload in ECS, using an Elastic Load Balancer for service discovery as I’m doing in this example, and provisioning using CloudFormation, you need to use a Classic Load Balancer that is listening on both Jenkins ports (listening on multiple ports is not currently possible with the recently released Application Load Balancer).
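
For reference, the equivalent of what the template sets up for the internal Jenkins load balancer looks roughly like the following. This is a hedged sketch with hypothetical names and subnet IDs, not the exact configuration from the CloudFormation template:

# A classic ELB with listeners for both Jenkins ports (8080 for web, 50000 for JNLP)
aws elb create-load-balancer --load-balancer-name jenkins-internal --scheme internal \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --listeners "Protocol=TCP,LoadBalancerPort=8080,InstanceProtocol=TCP,InstancePort=8080" \
              "Protocol=TCP,LoadBalancerPort=50000,InstanceProtocol=TCP,InstancePort=50000"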

Another complication is that Jenkins stores its state in XML on disk, as opposed to some other CI tools that allow you to use an external database to store state (examples coming later in this blog series). This is why I chose to use EFS in this stack—when requiring persistent data in an ECS container, you must be able to sync Docker volumes between your ECS container instances because a container for your service can run on any container instance in the cluster. EFS provides a valuable solution to this issue by allowing you to mount an NFS file system that is shared amongst all the container instances in your cluster.
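
On each container instance, that shared file system is just an NFS mount. The stack wires this up for you, but a minimal sketch (with a hypothetical EFS file system ID and region) looks like this:

# Mount the shared EFS file system at a path the Jenkins container can use as a volume
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs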

Coffee break!

Depending on how long you took to digest that fancy diagram and my explanation, feel free to grab a cup of coffee; the stack took about 7-8 minutes to complete successfully during my testing. When you see that beautiful CREATE_COMPLETE in the stack status, continue on.
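
If you’d rather not watch the console, the CLI can block until the stack is ready (substitute whatever stack name you chose when launching it):

aws cloudformation wait stack-create-complete --stack-name <your-jenkins-ecs-stack>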

Jenkins configuration

One of the CloudFormation stack outputs is PublicJenkinsURL; navigate to that URL in your browser and you should see the Jenkins home page (at least within a minute, once the instance is in service):

[Screenshot: Jenkins home page]

To make things easier, let’s click ENABLE AUTO REFRESH (in the upper-right) right off the bat.

Then click Manage Jenkins > Manage Plugins, navigate to the Available tab, and select these two plugins (you can filter the plugins by each name in the Filter text box):

  • Amazon EC2 Container Service Plugin
  • Git plugin
    • NOTE: there are a number of “Git” plugins, but you’ll want the one that’s just named “Git plugin”

And click Download now and install after restart.

Select the Restart Jenkins when installation is complete and no jobs are running checkbox at the bottom, and Jenkins will restart after the plugins are downloaded.

[Screenshot: plugin installation and Jenkins restart]

When Jenkins comes back after restarting, go back to the Jenkins home screen, and navigate to Manage Jenkins > Configure System.

Scroll down to the Cloud section, click Add a new cloud > Amazon EC2 Container Service Cloud, and enter the following configuration (substituting the CloudFormation stack output where indicated):

  • Name: ecs
  • Amazon ECS Credential: – none –  (because we’re using the IAM role of the container instance instead)
  • Amazon ECS Region Name: us-east-1  (or the region you launched your stack in)
  • ECS Cluster: [CloudFormation stack output: JenkinsConfigurationECSCluster]
  • Click Advanced…
  • Alternative Jenkins URL: [CloudFormation stack output: JenkinsConfigurationAlternativeJenkinsURL]
  • Click ECS slave templates > Add
    • Label: jnlp-slave-with-java-build-tools
    • Docker Image: cloudbees/jnlp-slave-with-java-build-tools:latest
    • Filesystem root: /home/jenkins
    • Memory: 512
    • CPU units: 512

And click Save at the bottom.

That should take you back to the Jenkins home page again. Now click New Item, and enter an item name of aws-java-sample, select Freestyle project, and click OK.

[Screenshot: creating the aws-java-sample freestyle project]

Enter the following configuration:

  • Make sure Restrict where this project can be run is selected and set:
    • Label Expression: jnlp-slave-with-java-build-tools
  • Under Source Code Management, select Git and enter:

[Screenshot: Git repository configuration for the sample project]

  • Under Build, click Add build step > Execute shell, and set:
    • Command: mvn package

Click Save.

That’s it for the Jenkins configuration. Now click Build Now on the left side of the screen.

[Screenshot: the aws-java-sample project page]

Under Build History, you’re going to see a “pending – waiting for next available executor” message, which will switch to a progress bar when the ECS container starts.  When the progress bar appears (it might take a couple of minutes for the first build while ECS downloads the Docker build slave image, but after this it should only take a few seconds when the image is cached on your ECS container instance), click it and you’ll see the console output for the build:

[Screenshot: build console output]

And Success!

Okay, Maven is downloading a bunch of dependencies…and more dependencies…and more dependencies…and finally building…and see that “Finished: SUCCESS”? Congratulations, you just ran a build in an ECS Jenkins build slave container!

Next Steps

One thing that you may have noticed is that we used a Docker image provided by CloudBees (the enterprise backers of Jenkins). For your own projects, you might need to build and use a custom build slave Docker image. You’ll probably want to set up a pipeline for each of these Docker builds (and possibly publish to Amazon ECR), and configure an ECS slave template that uses this custom image. One caveat: Jenkins slaves need to have Java installed, which, depending on your build dependencies, may increase the size of your Docker image somewhat significantly (well, relatively so for a Docker image). For reference, check out the Dockerfile of a bare-bones Jenkins build slave provided by the Jenkins project on Docker Hub.
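
Publishing a custom build slave image to ECR follows the usual build, tag and push flow. This is only a sketch with hypothetical repository and account values; your pipeline would automate these steps:

# Authenticate Docker to ECR, then build, tag and push the custom build slave image
eval $(aws --region us-east-1 ecr get-login)
docker build -t my-jenkins-slave .
docker tag my-jenkins-slave 111111111111.dkr.ecr.us-east-1.amazonaws.com/my-jenkins-slave:latest
docker push 111111111111.dkr.ecr.us-east-1.amazonaws.com/my-jenkins-slave:latest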

Next Next Steps

Pretty cool, right? Well, while it’s the most popular, Jenkins isn’t the only player in the game—stay tuned for a further exploration and comparison of containerized CI solutions on AWS in this blog series!

Interested in Docker, Jenkins, and/or working someplace where your artful use of monkey GIFs will finally be truly appreciated? Stelligent is hiring!

Automating ECS: Orchestrating in CodePipeline and CloudFormation (Part 2)

In my first post on automating the EC2 Container Service (ECS), I described how I automated the provisioning of ECS in AWS CloudFormation using its JSON-based DSL.

In this second and last part of the series, I will demonstrate how to create a deployment pipeline in AWS CodePipeline to deploy changes to ECS Docker images in the EC2 Container Registry (ECR).

In doing this, you’ll not only see how to automate the creation of the infrastructure but also automate the deployment of the application and its infrastructure via Docker containers. This way you can commit infrastructure, application and deployment changes as code to your version-control repository and have these changes automatically deployed to production or production-like environments.

The benefit is the customer responsiveness this embodies: you can deploy new features or fixes to users in minutes, not days or weeks.

Pipeline Architecture

In the figure below, you see the high-level architecture for the deployment pipeline.

 

[Diagram: Deployment Pipeline Architecture for ECS]

With the exception of the CodeCommit repository creation, most of the architecture is implemented in a CloudFormation template. Some of this is the result of not requiring a traditional configuration management tool to perform configuration on compute instances.

CodePipeline is a Continuous Delivery service that enables you to orchestrate every step of your software delivery process in a workflow that consists of a series of stages and actions. These actions perform the steps of your software delivery process.

In CodePipeline, I’ve defined two stages: Source and Build. The Source stage retrieves code artifacts via a CodeCommit repository whenever someone commits a new change. This initiates the pipeline. CodePipeline is integrated with the Jenkins Continuous Integration server. The Build stage updates the ECS Docker image (which runs a small PHP web application) within ECR and makes the new application available through an ELB endpoint.

Jenkins is installed and configured on an Amazon EC2 instance within an Amazon Virtual Private Cloud (VPC). The CloudFormation template runs commands to install and configure the Jenkins server, install and configure Docker, install and configure the CodePipeline plugin and configure the job that’s run as part of the CodePipeline build action. The Jenkins job is configured to run a bash script that’s committed to the CodeCommit repository. This bash script updates the ECS service and task definition by running a Docker build, tag and push to the ECR repository. I describe the implementation of this architecture in more detail in this post.

Jenkins

In this example, CodePipeline manages the orchestration of the software delivery workflow. Since CodePipeline doesn’t actually execute the actions, you need to integrate it with an execution platform. To perform the execution of the actions, I’m using the Jenkins Continuous Integration server. I’ll configure a CodePipeline plugin for Jenkins so that Jenkins executes certain CodePipeline actions.

In particular, I have an action to update an ECS service. I do this by running a CloudFormation update on the stack. CloudFormation looks for any differences in the templates and applies those changes to the existing stack.

To orchestrate and execute this CloudFormation update, I configure a CodePipeline custom action that calls a Jenkins job. In this Jenkins job, I call a shell script passing several arguments.

Provision Jenkins in CloudFormation

In the CloudFormation template, I create an EC2 instance on which I will install and configure the Jenkins server. This CloudFormation script is based on the CodePipeline starter kit.

To launch a Jenkins server in CloudFormation, you will use the AWS::EC2::Instance resource. Before doing this, you’ll create an IAM role and an EC2 security group in the already provisioned VPC (the VPC provisioning is part of the CloudFormation script).

Within the Metadata attribute of the resource (i.e. the EC2 instance on which Jenkins will run), you use the AWS::CloudFormation::Init resource to define the user data configuration. To apply your changes, you call cfn-init to run commands on the EC2 instance like this:

"/opt/aws/bin/cfn-init -v -s ",

Then, you can install and configure Docker:

"# Install Docker\n",
"cd /tmp/\n",
"yum install -y docker\n",

On this same instance, you will install and configure the Jenkins server:

"# Install Jenkins\n",
...
"yum install -y jenkins-1.658-1.1\n",
"service jenkins start\n",

And, apply the dynamic Jenkins configuration for the job so that it updates the CloudFormation stack based on arguments passed to the shell script.

"/bin/sed -i \"s/MY_STACK/",
{
"Ref":"AWS::StackName"
},
"/g\" /tmp/config-template.xml\n",

In the config-template.xml, I added tokens that get replaced as part of the commands run from the CloudFormation template. You can see a snippet of this below in which the command for the Jenkins job makes a call to the configure-ecs.sh bash script with some tokenized parameters.

<command>bash ./configure-ecs.sh MY_STACK MY_ACCTID MY_ECR</command>

All of the commands for installing and configuring the Jenkins Server, Docker, the CodePipeline plugin and Jenkins jobs are described in the CloudFormation template that is hosted in the version-control repository.

Jenkins Job Configuration Template

In the previous code snippets from CloudFormation, you see that I’m using sed to update a file called config-template.xml. This is a Jenkins job configuration file for which I’m updating some token variables with dynamic information that gets passed to it from CloudFormation. This information is used to run a bash script to update the CloudFormation stack – which is described in the next section.

ECS Service Script to Update CloudFormation Stack

The code snippet below shows how the bash script captures the arguments that are passed by the Jenkins job into bash variables. Later in the script, it uses these variables to call the update-stack command of the CloudFormation API to apply a new ECS Docker image to the endpoint.

MY_STACK=$1
MY_ACCTID=$2
MY_ECR=$3

uuid=$(date +%s)
awsacctid="$MY_ACCTID"
ecr_repo="$MY_ECR"
ecs_stack_name="$MY_STACK"
ecs_template_url="$MY_URL"

In the code snippet below of the configure-ecs.sh script, I’m building, tagging and pushing to the Docker repository in my EC2 Container Registry repository using the dynamic values passed to this script from Jenkins (which were initially passed from the parameters and resources of my CloudFormation script).

In doing this, it creates a new Docker image for each commit and tags it with a unique id based on date and time. Finally, it uses the AWS CLI to call the update-stack command of the CloudFormation API using the variable information.

eval $(aws --region us-east-1 ecr get-login)

# Build, Tag and Deploy Docker
docker build -t $ecr_repo:$uuid .
docker tag $ecr_repo:$uuid $awsacctid.dkr.ecr.us-east-1.amazonaws.com/$ecr_repo:$uuid
docker push $awsacctid.dkr.ecr.us-east-1.amazonaws.com/$ecr_repo:$uuid

aws cloudformation update-stack --stack-name $ecs_stack_name \
  --template-url $ecs_template_url --region us-east-1 \
  --capabilities="CAPABILITY_IAM" --parameters \
  ParameterKey=AppName,UsePreviousValue=true \
  ParameterKey=ECSRepoName,UsePreviousValue=true \
  ParameterKey=DesiredCapacity,UsePreviousValue=true \
  ParameterKey=KeyName,UsePreviousValue=true \
  ParameterKey=RepositoryBranch,UsePreviousValue=true \
  ParameterKey=RepositoryName,UsePreviousValue=true \
  ParameterKey=InstanceType,UsePreviousValue=true \
  ParameterKey=MaxSize,UsePreviousValue=true \
  ParameterKey=S3ArtifactBucket,UsePreviousValue=true \
  ParameterKey=S3ArtifactObject,UsePreviousValue=true \
  ParameterKey=SSHLocation,UsePreviousValue=true \
  ParameterKey=YourIP,UsePreviousValue=true \
  ParameterKey=ImageTag,ParameterValue=$uuid

Now that you have seen the basics of installing and configuring Jenkins in CloudFormation and what happens when the Jenkins job is run through the CodePipeline orchestration, let’s look at the steps for configuring the CodePipeline part of the CodePipeline/Jenkins integration.

Create a Pipeline using AWS CodePipeline

Before I create a working pipeline, I prefer to model the stages and actions in CodePipeline using Lambda so that I can think through the workflow. To do this I refer to my blog post on Mocking AWS CodePipeline pipelines with Lambda. I’m going to create a two-stage pipeline consisting of a Source and a Build stage. These stages and the actions in these stages are described in more detail below.

Define a Custom Action

There are five types of action categories in CodePipeline: Source, Build, Deploy, Invoke and Test. Each action has four attributes: category, owner, provider and version. There are three types of action owners: AWS, ThirdParty and Custom. AWS refers to built-in actions provided by AWS. Currently, there are four built-in action providers from AWS: S3, CodeCommit, CodeDeploy and ElasticBeanstalk. Examples of ThirdParty action providers include RunScope and GitHub. If none of the action providers suit your needs, you can define custom actions in CodePipeline. In my case, I wanted to run a script from a Jenkins job so I used the CloudFormation sample configuration from the CodePipeline starter kit for the configuration of the custom build action that I use to integrate Jenkins with CodePipeline. See the snippet below.

    "CustomJenkinsActionType":{
      "Type":"AWS::CodePipeline::CustomActionType",
      "DependsOn":"JenkinsHostWaitCondition",
      "Properties":{
        "Category":"Build",
        "Provider":{
          "Fn::Join":[
            "",
            [
              {
                "Ref":"AppName"
              },
              "-Jenkins"
            ]
          ]
        },
        "Version":"1",
        "ConfigurationProperties":[
          {
            "Key":"true",
            "Name":"ProjectName",
            "Queryable":"true",
            "Required":"true",
            "Secret":"false",
            "Type":"String"
          }
        ],
        "InputArtifactDetails":{
          "MaximumCount":5,
          "MinimumCount":0
        },
        "OutputArtifactDetails":{
          "MaximumCount":5,
          "MinimumCount":0
        },
        "Settings":{
          "EntityUrlTemplate":{
            "Fn::Join":[
              "",
              [
                "http://",
                {
                  "Fn::GetAtt":[
                    "JenkinsServer",
                    "PublicIp"
                  ]
                },
                "/job/{Config:ProjectName}"
              ]
            ]
          },
          "ExecutionUrlTemplate":{
            "Fn::Join":[
              "",
              [
                "http://",
                {
                  "Fn::GetAtt":[
                    "JenkinsServer",
                    "PublicIp"
                  ]
                },
                "/job/{Config:ProjectName}/{ExternalExecutionId}"
              ]
            ]
          }
        }
      }
    },

The example pipeline that I’ve defined in CodePipeline (and described as code in CloudFormation) uses the above custom action in the Build stage of the pipeline, which is described in more detail in the Build Stage section later.

Source Stage

The Source stage has a single action that looks for any changes to a CodeCommit repository. If it discovers any new commits, it retrieves the artifacts from CodeCommit and stores them in encrypted form in an S3 bucket. If it’s successful, it transitions to the next stage: Build. A snippet from the CodePipeline resource definition for the Source stage in CloudFormation is shown below.

        "Stages":[
          {
            "Name":"Source",
            "Actions":[
              {
                "InputArtifacts":[

                ],
                "Name":"Source",
                "ActionTypeId":{
                  "Category":"Source",
                  "Owner":"AWS",
                  "Version":"1",
                  "Provider":"CodeCommit"
                },
                "OutputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-SourceArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "Configuration":{
                  "BranchName":{
                    "Ref":"RepositoryBranch"
                  },
                  "RepositoryName":{
                    "Ref":"RepositoryName"
                  }
                },
                "RunOrder":1
              }
            ]
          },

Build Stage

The Build stage invokes actions to create a new ECS repository if one doesn’t exist, builds and tags a Docker image and makes a call to a CloudFormation template to launch the rest of the ECS environment – including creating an ECS cluster, task definition, ECS services, ELB, Security Groups and IAM resources. It does this using the custom CodePipeline action for Jenkins that I described earlier. A snippet from the CodePipeline resource definition in CloudFormation for the Build stage is shown below.

          {
            "Name":"Build",
            "Actions":[
              {
                "Name":"DeployPHPApp",
                "InputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-SourceArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "ActionTypeId":{
                  "Category":"Build",
                  "Owner":"Custom",
                  "Version":"1",
                  "Provider":{
                    "Fn::Join":[
                      "",
                      [
                        {
                          "Ref":"AWS::StackName"
                        },
                        "-Jenkins"
                      ]
                    ]
                  }
                },
                "OutputArtifacts":[
                  {
                    "Name":{
                      "Fn::Join":[
                        "",
                        [
                          {
                            "Ref":"AWS::StackName"
                          },
                          "-BuiltArtifact"
                        ]
                      ]
                    }
                  }
                ],
                "Configuration":{
                  "ProjectName":{
                    "Ref":"AWS::StackName"
                  }
                },
                "RunOrder":1
              }
            ]
          }

The custom action for Jenkins (via the CodePipeline plugin) polls CodePipeline for work. When it finds work, it performs the task associated with the CodePipeline action. In this case, it runs the Jenkins job that calls the configure-ecs.sh script. This bash script makes an update-stack call against the original CloudFormation stack, passing in the new image via the ImageTag parameter, which is the new tag generated for the Docker image built as part of this script.
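A custom action worker such as the Jenkins plugin discovers pending work through the CodePipeline PollForJobs API. Purely as an illustration (you don’t need to run this yourself), the sketch below shows the equivalent call from the AWS CLI; the provider value is a placeholder for whatever your CustomJenkinsActionType resolves to (the AppName parameter with “-Jenkins” appended).

# Illustrative only: ask CodePipeline for pending jobs for the custom Jenkins build action.
# The provider value below is a placeholder; use the provider generated by your stack.
aws codepipeline poll-for-jobs --region us-east-1 \
  --action-type-id category=Build,owner=Custom,provider=app-name-1648-Jenkins,version=1 \
  --max-batch-size 1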

CloudFormation applies only the minimum changes necessary to the infrastructure based on the stack update. In this case, I’m only providing a new image tag, but this results in creating a new ECS task definition for the service. In your CloudFormation events console, you’ll see a message similar to the one below:

AWS::ECS::TaskDefinition Requested update requires the creation of a new physical resource; hence creating one.
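You can also watch these stack events from the AWS CLI rather than the console. A minimal example is shown below, assuming the stack name used later in this post (ecs-stack-1648):

# List recent stack events for the ECS stack
aws cloudformation describe-stack-events --stack-name ecs-stack-1648 --region us-east-1 \
  --query "StackEvents[].[ResourceType,ResourceStatus,ResourceStatusReason]" --output table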

As I mentioned in part 1 of this series, I defined a DeploymentConfiguration type with a MinimumHealthyPercent property of 0 since I’m only using one EC2 instance while running through the earlier stages of the pipeline. This means the application experiences a few seconds of downtime during the update. As with most applications/services these days, if I needed continual uptime, I’d increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.
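In this walkthrough those values live in the CloudFormation template, so the proper fix is a template change, but for experimentation you could adjust the same knobs directly on a running service with the AWS CLI. The cluster and service names below are placeholders for the ones your stack creates (and you’d also need enough container instances in the Auto Scaling Group to place the extra tasks):

# Hypothetical names: substitute the ECS cluster and service created by your stack
aws ecs update-service --region us-east-1 \
  --cluster YOUR_ECS_CLUSTER --service YOUR_ECS_SERVICE \
  --desired-count 2 \
  --deployment-configuration maximumPercent=200,minimumHealthyPercent=50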

Other Stages

In the example I provided, I stop at the Build stage. If you were to take this to production, you might include other stages as well. Perhaps you might have a “Staging” stage in which you might include actions to deploy the application to the ECS containers using a production-like configuration which might include more instances in the Auto Scaling Group.

Once Staging is complete, the pipeline would automatically transition to the Production stage where it might make Lambda calls to test the application running in ECS containers. If everything looks ok, it switches the Route 53 hosted zone endpoint to the new container.
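As a rough sketch of what that Production-stage switch might look like, the CLI call below upserts a DNS record to point at the ELB fronting the new containers. The hosted zone ID, record name, and ELB DNS name are all placeholders:

# Illustrative only: repoint a CNAME at the ELB fronting the new containers
aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"app.example.com","Type":"CNAME","TTL":60,"ResourceRecords":[{"Value":"new-elb-1234567890.us-east-1.elb.amazonaws.com"}]}}]}'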

Launch the ECS Stack and Pipeline

In this section, you’ll launch the CloudFormation stack that creates the ECS and Pipeline resources.

Prerequisites

You need to have already created an ECR repository and a CodeCommit repository to successfully launch this stack. For instructions on creating an ECR repository, see part 1 of this series (or to directly launch the CloudFormation stack to create this ECR repository, click this button: .) For creating a CodeCommit repository, you can either see part 1 or use the instructions described at: Create and Connect to an AWS CodeCommit Repository.

Launch the Stack

Click the button below to launch a CloudFormation stack that provisions the ECS environment including all the resources previously described such as CodePipeline, ECS Cluster, ECS Task Definition, ECS Service, ELB, VPC resources, IAM Roles, etc.

You’ll enter values for the following parameters: RepositoryName, YourIP, KeyName, and ECSRepoName.

To launch the same stack from your AWS CLI, type the following (while modifying the same parameter values described above):

aws cloudformation create-stack --stack-name ecs-stack-1648 \
  --template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/ecs-pipeline.json \
  --region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" \
  --parameters ParameterKey=RepositoryName,ParameterValue=YOURCCREPO \
  ParameterKey=RepositoryBranch,ParameterValue=master \
  ParameterKey=KeyName,ParameterValue=YOUREC2KEYPAIR \
  ParameterKey=YourIP,ParameterValue=YOURIP/32 \
  ParameterKey=ECSRepoName,ParameterValue=YOURECRREPO \
  ParameterKey=ECSCFNURL,ParameterValue=NOURL \
  ParameterKey=AppName,ParameterValue=app-name-1648

Outputs

Once the CloudFormation stack successfully launches, there are several outputs but the two most relevant are AppURL and CodePipelineURL. You can click on the AppURL value to launch the PHP application running on ECS from the ELB endpoint. The CodePipelineURL output value launches the generated pipeline from the CodePipeline console. See the screenshot below.

codepipeline_beanstalk_cfn_outputs  
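If you’d rather grab these outputs from the command line, you can query the stack directly; the example below assumes the stack name used earlier (ecs-stack-1648):

# Print the stack outputs, including AppURL and CodePipelineURL
aws cloudformation describe-stacks --stack-name ecs-stack-1648 --region us-east-1 \
  --query "Stacks[0].Outputs" --output table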

Access the Application

Once the stack successfully completes, go to the Outputs tab for the CloudFormation stack and click on the AppURL value to launch the application.

codepipeline_ecs_php_app_before

Commit Changes to CodeCommit

Make some visual changes to the code and commit them to your CodeCommit repository to see the changes get deployed through your pipeline. Run these commands from the directory where you cloned your CodeCommit repo (the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to pink"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline. After the pipeline is successfully completed, follow the same instructions for launching the application from your browser.

codepipeline_ecs_php_app_after
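If you want to follow the pipeline’s progress from the command line rather than the console, the CLI can report the state of each stage. The pipeline name below is a placeholder for the one generated by your stack (you can find it with list-pipelines or from the CodePipelineURL output):

# Find the generated pipeline name, then inspect the state of its stages and actions
aws codepipeline list-pipelines --region us-east-1
aws codepipeline get-pipeline-state --name YOUR_PIPELINE_NAME --region us-east-1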

Making Modifications

While the solution can work “straight out of the box”, if you’d like to make some changes, I’ve included a few sections of the code that you’ll need to modify.

configure-ecs.sh

The configure-ecs.sh Bash script runs the Docker commands to build, tag and push the image, and then updates the existing CloudFormation stack to update the ECS service and task. The source for this bash script is here: https://github.com/stelligent/cloudformation_templates/blob/master/labs/ecs/configure-ecs.sh. I hard-coded the ecs_template_url variable to a specific S3 location. To change this behavior, download the source file from GitHub or S3, make your desired modifications, and then point the ecs_template_url variable at the new location (presumably in S3).
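A hypothetical workflow for hosting your own copy might look like the sketch below; the bucket, key, and template URL are placeholders, and the sed command assumes the ecs_template_url assignment appears on its own line as in the snippet shown earlier:

# Point the script at your own copy of the ECS CloudFormation template
sed -i 's|^ecs_template_url=.*|ecs_template_url="https://s3.amazonaws.com/YOUR_BUCKET/ecs/ecs.json"|' configure-ecs.sh

# Host the modified script in your own S3 bucket and reference it from your Jenkins job
aws s3 cp configure-ecs.sh s3://YOUR_BUCKET/ecs/configure-ecs.sh --region us-east-1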

config-template.xml

The config-template.xml file provides the Jenkins job configuration for the ECS update action. This XML file contains tokens that the ecs-pipeline.json CloudFormation template replaces with dynamic information such as the CloudFormation stack name, account id, etc. The file is retrieved via a wget command from within the template and is stored in S3 at https://s3.amazonaws.com/stelligent-training-public/public/jenkins/config-template.xml, so you can copy it to an S3 location in your own account and update the CloudFormation template to point to the new location. In doing this, you can modify the behavior of the Jenkins job that performs the updates.

Summary

In this series, you learned how to use CloudFormation to fully automate the provisioning of the EC2 Container Service along with a CodePipeline pipeline that uses CodeCommit as its version-control repository, so that whenever a change is made to the Git repo, the changes are automatically applied to a PHP application hosted in Docker containers on ECS.

By modeling your pipeline in CodePipeline, you can add even more stages and actions to your Continuous Delivery process so that it runs through all the tests and other checks, enabling you to deliver changes to production whenever there’s a business need to do so.

Sample Code

The code for the examples demonstrated in this post is located at https://github.com/stelligent/cloudformation_templates/tree/master/labs/ecs. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Notes

The sample solution currently only works in the us-east-1 AWS region. You will be charged for your AWS usage – including EC2, S3, CodePipeline and other services.

Resources

Here’s a list of some of the resources described in or that influenced this post:

 

Automating ECS: Provisioning in CloudFormation (Part 1)

In this two-part series, you’ll learn how to provision, configure, and orchestrate EC2 Container Service (ECS) applications into a deployment pipeline that’s capable of deploying new infrastructure and code changes when developers commit changes to a version-control repository, so that team members can release new changes to users whenever they choose to do so: Continuous Delivery.

While the primary AWS service described in this solution is ECS, I’ll also be covering the various components and services that support this solution including AWS CloudFormation, EC2 Container Registry (ECR), Docker, Identity and Access Management (IAM), VPC and Auto Scaling Services – to name a few. In part 2, I’ll be covering the integration of CodePipeline, Jenkins and CodeCommit in greater detail.

ECS allows you to run Docker containers on Amazon. The benefits of ECS and Docker include the following:

  • Portability – You can build on one Linux operating system and have it work on others without modification. It’s also portable across environment types so you can build it in development and use the same image in production.
  • Scalability – You can run multiple images on the same EC2 instance to scale thousands of tasks across a cluster.
  • Speed – Increase your speed of development and speed of runtime execution.

“ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.” [1]

The reason you might use Docker-based containers over traditional virtual machine-based application deployments is that it allows a faster, more flexible, and still very robust immutable deployment pattern in comparison with services such as traditional Elastic Beanstalk, OpsWorks, or native EC2 instances.

While you can very effectively integrate Docker into Elastic Beanstalk, ECS provides greater overall flexibility.

The reason you might use ECS or Elastic Beanstalk containers with EC2 Container Registry over similar offerings such as Docker Hub or Docker Trusted Registry is higher performance, better availability, and lower pricing. In addition, ECR utilizes other AWS services such as IAM and S3, allowing you to compose more secure or robust patterns to meet your needs.

Based on the current implementation of Lambda, the reasons you might choose to utilize ECS instead of serverless architectures include:

  • Lower latency in request response time
  • Flexibility in the underlying language stack to use
  • Elimination of AWS Lambda service limits (requests per second, code size, total code runtime)
  • Greater control of the application runtime environment
  • The ability to link modules in ways not possible with Lambda functions

I’ll be using a sample PHP application provided by AWS to demonstrate a Continuous Delivery pipeline using ECS, CloudFormation and, in part 2, AWS CodePipeline.

Create and Connect to a CodeCommit Repository

While you can store your application code in any version-control repository, in this example, I’ll be using the AWS CodeCommit Git repository. I’ll be integrating CodeCommit with CodePipeline. I’m basing the code from the Amazon ECS PHP Simple Demo App located at https://github.com/awslabs/ecs-demo-php-simple-app.

To create your own CodeCommit repo, follow these instructions: Create and Connect to an AWS CodeCommit Repository. Take note of the repository name as you’ll be using it as a CloudFormation user parameter in part 2. I called my CodeCommit repository ecs-demo. You can use the same name, but if you name it something different, be sure to replace the name in the samples with your repo name.

After you create your CodeCommit repo, copy the contents from the AWS PHP ECS Demo app and commit all of the files.
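If you prefer the CLI to the console, a minimal sketch of creating the repo and importing the demo application is shown below. It assumes you’ve already configured SSH access to CodeCommit, and the copy command intentionally skips hidden files:

# Create the CodeCommit repo and clone it locally
aws codecommit create-repository --repository-name ecs-demo --region us-east-1
git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/ecs-demo

# Pull in the AWS PHP ECS demo app and push it to CodeCommit
git clone https://github.com/awslabs/ecs-demo-php-simple-app.git
cp -R ecs-demo-php-simple-app/* ecs-demo/
cd ecs-demo
git add -A
git commit -m "Import AWS PHP ECS demo app"
git push origin master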

CodeCommit provides the following features and benefits[2]:

  • Highly available, Secure and Private Git repositories
  • Use your existing Git tools
  • Automatically encrypts all files in transit and at rest
  • Provides Webhooks – to trigger Lambda functions or push notifications in response to events
  • Integrated with other AWS services like IAM so you can define user-specific permissions

Create a Private Image Repository in ECS using ECR

codepipeline_ecr_arch
You can create private Docker repositories using EC2 Container Registry (ECR) to store your Docker images. Follow these instructions to manually create an ECR repository: Create a Repository.
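If you’d rather skip the console, a single CLI call creates a repository as well; the repository name below is just an example:

# Create a private ECR repository (example name)
aws ecr create-repository --repository-name ecs-demo --region us-east-1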

A snippet of the CloudFormation template for provisioning an ECR repo is listed below.

    "MyRepository":{
      "Type":"AWS::ECR::Repository",
      "Properties":{
        "RepositoryName":{
          "Ref":"AWS::StackName"
        },
        "RepositoryPolicyText":{
          "Version":"2008-10-17",
          "Statement":[
            {
              "Sid":"AllowPushPull",
              "Effect":"Allow",
              "Principal":{
                "AWS":[
                  {
                    "Fn::Join":[
                      "",
                      [
                        "arn:aws:iam::",
                        {
                          "Ref":"AWS::AccountId"
                        },
                        ":user/",
                        {
                          "Ref":"IAMUsername"
                        }
                      ]
                    ]
                  }
                ]
              },
              "Action":[
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload"
              ]
            }
          ]
        }
      }
    }

In defining an ECR, you can securely store your Docker images and refer to them when building, tagging and pushing these Docker images.

To launch the CloudFormation stack to create an ECR repository, click this button: . Your IAM username is a parameter to this CloudFormation template. You only need to enter the IAM username (and not the entire ARN) as the input value. Make note of the ECSRepository Output from the stack as you’ll be using this as an input to the ECS Environment Stack in part 2.

Docker

“Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.” [3] In this demonstration, you’ll build, tag and push a PHP application as a Docker image into an ECR repository.

Build Docker Image and Upload to ECR Locally

Prerequisites

  • You’re running these commands from an Amazon Linux EC2 instance. If you’re not, you’ll need to adapt the instructions according to your OS flavor.
  • You’ve created an ECR repo (see the “Create a Private Image Repository in ECS using ECR” section above)
  • You’ve created a CodeCommit repository and committed the PHP code from the AWS PHP app in GitHub (see the “Create and Connect to a CodeCommit Repository” section above)

Steps

  1. Install Docker on an Amazon Linux EC2 instance for which your AWS CLI has been configured (you can find detailed instructions at Install Docker)
    sudo yum update -y
    sudo yum install -y docker
    sudo service docker start
    sudo usermod -a -G docker ec2-user
  2. Logout and log back in and type:
    docker info
  3. Install Git:
    sudo yum -y install git*
  4. Clone the ECS PHP example application (if you used a different repo name, be sure to update the sample command here):
    git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/ecs-demo
  5. Change your directory:
    cd ecs-demo
  6. Configure your AWS account by running the command below and following the prompts to enter your credentials, region and output format.
    aws configure
  7. Run the command below to login to ECR.
    eval $(aws --region us-east-1 ecr get-login)
  8. Build the image using Docker. Replace REPOSITORY_NAME with the ECSRepository Output from the ECR stack you launched and TAG with a unique value. Make note of the image tag you use when creating the Docker image, as you’ll be using it as an input parameter to a CloudFormation stack later. If you want to use the default value, just name it latest.
    docker build -t REPOSITORY_NAME:TAG .
  9. Tag the image (replace REPOSITORY_NAME, TAG and AWS_ACCOUNT_ID):
    docker tag REPOSITORY_NAME:TAG AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:TAG
  10. Push the tagged image to ECR (replace REPOSITORY_NAME, AWS_ACCOUNT_ID and TAG):
    docker push AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:TAG
  11. Verify the image was uploaded to your ECS Repository by going to your AWS ECS Console, clicking on Repositories and selecting the repository you created when you launched the ECS Stack.
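As an alternative to the console check in step 11, you can confirm the push from the CLI as well:

# List the image tags stored in your ECR repository (replace REPOSITORY_NAME)
aws ecr list-images --repository-name REPOSITORY_NAME --region us-east-1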

Dockerfile

“A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.” [4] The snippet you see below is the Dockerfile that runs the PHP sample application. You can see that it runs OS updates, installs the required packages including Apache and PHP, and then configures the HTTP server and port. While these are the types of steps you might run in any automated build and deployment script, the difference is that they run within a container, which means they run very quickly, you can run the same steps across operating systems, and you can run these procedures across multiple tasks in a cluster.

FROM ubuntu:12.04

# Install dependencies
RUN apt-get update -y
RUN apt-get install -y git curl apache2 php5 libapache2-mod-php5 php5-mcrypt php5-mysql

# Install app
RUN rm -rf /var/www/*
ADD src /var/www

# Configure apache
RUN a2enmod rewrite
RUN chown -R www-data:www-data /var/www
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D",  "FOREGROUND"]

This Dockerfile gets run when you run the docker build command. This file has been committed to my CodeCommit repo as you can see in the figure below.

ecs_codecommit
AWS CodeCommit repository for a PHP application illustrating Dockerfile location

Create an ECS Environment in CloudFormation

In this section, I describe how to configure the entire ECS stack in CloudFormation. This includes the architecture, its dependencies, and the key CloudFormation resources that make up the stack.

Architecture

The overall solution architecture is illustrated in the CloudFormation diagram below.

codepipeline_ecs_arch.jpg
Provisioning, Configuring and Orchestrating an EC2 Container Service Architecture
  • Auto Scaling Group – I’m using an auto scaling group to scale the underlying EC2 infrastructure in the ECS Cluster. It’s used in conjunction with the Launch Configuration.
  • Auto Scaling Launch Configuration – I’m using a launch configuration to scale the underlying EC2 infrastructure in the ECS Cluster. It’s used in conjunction with the  Auto Scaling Group.
  • CodeCommit – I’m using CodeCommit as my Git repo to store the application and infrastructure code.
  • CodePipeline – CodePipeline describes my Continuous Delivery workflow. In particular, it integrates with CodeCommit and Jenkins to run actions every time someone commits new code to the CodeCommit repo. This will be covered in more detail in part 2.
  • ECS Cluster – “An ECS cluster is a logical grouping of container instances that you can place tasks on.”[6]
  • ECS Service – With an ECS service, you can run a specific number of instances of a task definition simultaneously in an ECS cluster [5]
  • ECS Task Definition – A task definition is the core resource within ECS. This is where you define which Docker images to run, CPU/Memory, ports, commands and so on. Everything else in ECS is based upon the task definition
  • Elastic Load Balancer – The ELB provides the endpoint for the application. The ELB dynamically determines which EC2 instance in the cluster is serving the running ECS tasks at any given time.
  • IAM Instance Profile – “An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.” [7] In the sample, I’m using the instance profile to attach the IAM role that the launch configuration uses for the underlying EC2 instances on which the ECS cluster runs
  • IAM Roles – I’m describing roles that have access to certain AWS resources for the EC2 instances (for ECS), Jenkins and CodePipeline
  • Jenkins – I’m using Jenkins to execute the actions that I’ve defined in CodePipeline. For example, I have a bash script that updates the CloudFormation stack when an ECS Service is updated. This action is orchestrated via CodePipeline and then executed on the Jenkins server in one of its configured jobs. This will be covered in more detail in part 2.
  • Virtual Private Cloud (VPC) – In the CloudFormation template, I’m using a VPC template that we developed to define VPC resources such as: VPCGatewayAttachment, SecurityGroup, SecurityGroupIngress, SecurityGroupEgress, SubnetNetworkAclAssociation, NetworkAclEntry, NetworkAcl, SubnetRouteTableAssociation, Route, RouteTable, InternetGateway, and Subnet

Dependencies

There are four core dependencies in this solution: EC2 Key Pair, CodeCommit Repo, a VPC, and an ECR repo and Docker Image

  • EC2 Key Pair – A key pair for which you have access. See Create a Key Pair.
  • CodeCommit – In this demo, I’m using an AWS CodeCommit Git repo to store the PHP application code along with my Docker configuration. See the instructions for configuring a Git repo in CodeCommit above
  • VPC – This template requires an existing AWS Virtual Private Cloud has been created
  • ECR repo and image – You should have created an EC2 Container Registry (ECR) repository using the CloudFormation template from the previous section. You should have also built, tagged and pushed a Docker image to ECR using the instructions described at Create a Private Image Repository in ECS using ECR above

ECS Cluster

With an ECS Cluster, you can manage multiple services. An ECS Container Instance runs an ECS agent that is registered to the ECS Cluster. To define an ECS Cluster in CloudFormation, use the Cluster resource: AWS::ECS::Cluster as shown below.

    "EcsCluster":{
      "Type":"AWS::ECS::Cluster",
      "DependsOn":[
        "MyVPC"
      ]
    },

ECS Service

An ECS Service defines a task definition and a desired number of task instances. A service manages tasks of a specified task definition.

In the context of ECS, an ELB distributes load between the different EC2 instances hosting your tasks, so you can optionally create a new ELB when creating a service.

To define an ECS Service in CloudFormation, use the Service resource: AWS::ECS::Service.

    "EcsService":{
      "Type":"AWS::ECS::Service",
      "DependsOn":[
        "MyVPC",
        "ECSAutoScalingGroup"
      ],
      "Properties":{
        "Cluster":{
          "Ref":"EcsCluster"
        },
        "DesiredCount":"1",
        "DeploymentConfiguration":{
          "MaximumPercent":100,
          "MinimumHealthyPercent":0
        },
        "LoadBalancers":[
          {
            "ContainerName":"php-simple-app",
            "ContainerPort":"80",
            "LoadBalancerName":{
              "Ref":"EcsElb"
            }
          }
        ],
        "Role":{
          "Ref":"EcsServiceRole"
        },
        "TaskDefinition":{
          "Ref":"PhpTaskDefinition"
        }
      }
    },

Notice that I defined a DeploymentConfiguration with a MinimumHealthyPercent of 0. Since I’m only using one EC2 instance in development, the ECS service would otherwise fail during a CloudFormation update; by setting the MinimumHealthyPercent to zero, the application instead experiences a few seconds of downtime during the update. As with most applications/services these days, if I needed continual uptime, I’d increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.

Task Definition

With an ECS Task Definition, you can define multiple Container Definitions and volumes. With a Container Definition, you define port mappings, environment variables, CPU Units and Memory. An ECS Volume is a persistent volume to mount and map to container volumes.

To define an ECS Task Definition, use the ECS Task Definition resource: AWS::ECS::TaskDefinition.

    "PhpTaskDefinition":{
      "Type":"AWS::ECS::TaskDefinition",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "ContainerDefinitions":[
          {
            "Name":"php-simple-app",
            "Cpu":"10",
            "Essential":"true",
            "Image":{
              "Fn::Join":[
                "",
                [
                  {
                    "Ref":"AWS::AccountId"
                  },
                  ".dkr.ecr.us-east-1.amazonaws.com/",
                  {
                    "Ref":"ECSRepoName"
                  },
                  ":",
                  {
                    "Ref":"ImageTag"
                  }
                ]
              ]
            },
            "Memory":"300",
            "PortMappings":[
              {
                "HostPort":80,
                "ContainerPort":80
              }
            ]
          }
        ],
        "Volumes":[
          {
            "Name":"my-vol"
          }
        ]
      }
    },

Auto Scaling

To define an Auto Scaling Group, use the Auto Scaling Group resource in CloudFormation: AWS::AutoScaling::AutoScalingGroup.

    "ECSAutoScalingGroup":{
      "Type":"AWS::AutoScaling::AutoScalingGroup",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "VPCZoneIdentifier":[
          {
            "Ref":"publicSubnet01"
          },
          {
            "Ref":"publicSubnet02"
          }
        ],
        "LaunchConfigurationName":{
          "Ref":"ContainerInstances"
        },
        "MinSize":"1",
        "MaxSize":{
          "Ref":"MaxSize"
        },
        "DesiredCapacity":{
          "Ref":"DesiredCapacity"
        }
      },
      "CreationPolicy":{
        "ResourceSignal":{
          "Timeout":"PT15M"
        }
      },
      "UpdatePolicy":{
        "AutoScalingRollingUpdate":{
          "MinInstancesInService":"1",
          "MaxBatchSize":"1",
          "PauseTime":"PT15M",
          "WaitOnResourceSignals":"true"
        }
      }
    },

To define a Launch Configuration, use the Launch Configuration resource in CloudFormation: AWS::AutoScaling::LaunchConfiguration.

    "ContainerInstances":{
      "Type":"AWS::AutoScaling::LaunchConfiguration",
      "DependsOn":[
        "MyVPC"
      ],
      "Metadata":{
        "AWS::CloudFormation::Init":{
          "config":{
            "commands":{
              "01_add_instance_to_cluster":{
                "command":{
                  "Fn::Join":[
                    "",
                    [
                      "#!/bin/bash\n",
                      "echo ECS_CLUSTER=",
                      {
                        "Ref":"EcsCluster"
                      },
                      " >> /etc/ecs/ecs.config"
                    ]
                  ]
                }
              }
            },
            "files":{
              "/etc/cfn/cfn-hup.conf":{
                "content":{
                  "Fn::Join":[
                    "",
                    [
                      "[main]\n",
                      "stack=",
                      {
                        "Ref":"AWS::StackId"
                      },
                      "\n",
                      "region=",
                      {
                        "Ref":"AWS::Region"
                      },
                      "\n"
                    ]
                  ]
                },
                "mode":"000400",
                "owner":"root",
                "group":"root"
              },
              "/etc/cfn/hooks.d/cfn-auto-reloader.conf":{
                "content":{
                  "Fn::Join":[
                    "",
                    [
                      "[cfn-auto-reloader-hook]\n",
                      "triggers=post.update\n",
                      "path=Resources.ContainerInstances.Metadata.AWS::CloudFormation::Init\n",
                      "action=/opt/aws/bin/cfn-init -v ",
                      "         --stack ",
                      {
                        "Ref":"AWS::StackName"
                      },
                      "         --resource ContainerInstances ",
                      "         --region ",
                      {
                        "Ref":"AWS::Region"
                      },
                      "\n",
                      "runas=root\n"
                    ]
                  ]
                }
              }
            },


IAM

To define an IAM Instance Profile, use the InstanceProfile resource in CloudFormation: AWS::IAM::InstanceProfile.

    "EC2InstanceProfile":{
      "Type":"AWS::IAM::InstanceProfile",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "Path":"/",
        "Roles":[
          {
            "Ref":"EC2Role"
          }
        ]
      }
    },
    "JenkinsRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Sid":"",
              "Effect":"Allow",
              "Principal":{
                "Service":"ec2.amazonaws.com"
              },
              "Action":"sts:AssumeRole"
            }
          ]
        },
        "Path":"/"
      }
    },

To define an IAM Role, use the IAM Role resource in CloudFormation: AWS::IAM::Role. The snippet below is for the EC2 role.

    "EC2Role":{
      "Type":"AWS::IAM::Role",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  "ec2.amazonaws.com"
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"ecs-service",
            "PolicyDocument":{
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "ecs:CreateCluster",
                    "ecs:RegisterContainerInstance",
                    "ecs:DeregisterContainerInstance",
                    "ecs:DiscoverPollEndpoint",
                    "ecs:Submit*",
                    "ecr:*",
                    "ecs:Poll"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

The snippet below is for defining the ECS IAM role.

    "EcsServiceRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  "ecs.amazonaws.com"
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"ecs-service",
            "PolicyDocument":{
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "elasticloadbalancing:Describe*",
                    "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                    "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                    "ec2:Describe*",
                    "ec2:AuthorizeSecurityGroupIngress"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

EC2

To define security group ingress within a VPC, use the SecurityGroupIngress resource in CloudFormation: AWS::EC2::SecurityGroupIngress.

    "InboundRule":{
      "Type":"AWS::EC2::SecurityGroupIngress",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"0",
        "ToPort":"65535",
        "SourceSecurityGroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        },
        "GroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        }
      }
    },

To define the security group egress within a VPC, use the SecurityGroupEgress resource in CloudFormation: AWS::EC2::SecurityGroupEgress.

    "OutboundRule":{
      "Type":"AWS::EC2::SecurityGroupEgress",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"0",
        "ToPort":"65535",
        "DestinationSecurityGroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        },
        "GroupId":{
          "Fn::GetAtt":[
            "SourceSG",
            "GroupId"
          ]
        }
      }
    },

To define the security group within a VPC, use the SecurityGroup resource in CloudFormation: AWS::EC2::SecurityGroup.

    "SourceSG":{
      "Type":"AWS::EC2::SecurityGroup",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "VpcId":{
          "Ref":"MyVPC"
        },
        "GroupDescription":"Sample source security group",
        "SecurityGroupIngress":[
          {
            "IpProtocol":"tcp",
            "FromPort":"80",
            "ToPort":"80",
            "CidrIp":"0.0.0.0/0"
          }
        ],
        "Tags":[
          {
            "Key":"Name",
            "Value":{
              "Fn::Join":[
                "",
                [
                  {
                    "Ref":"AWS::StackName"
                  },
                  "-SourceSG"
                ]
              ]
            }
          }
        ]
      }
    },

ELB

To define the ELB, use the LoadBalancer resource in CloudFormation: AWS::ElasticLoadBalancing::LoadBalancer.

    "EcsElb":{
      "Type":"AWS::ElasticLoadBalancing::LoadBalancer",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "Subnets":[
          {
            "Ref":"publicSubnet01"
          },
          {
            "Ref":"publicSubnet02"
          }
        ],
        "Listeners":[
          {
            "LoadBalancerPort":"80",
            "InstancePort":"80",
            "Protocol":"HTTP"
          }
        ],
        "SecurityGroups":[
          {
            "Ref":"SourceSG"
          },
          {
            "Ref":"TargetSG"
          }
        ],
        "HealthCheck":{
          "Target":"HTTP:80/",
          "HealthyThreshold":"2",
          "UnhealthyThreshold":"10",
          "Interval":"30",
          "Timeout":"5"
        }
      }
    },

Summary

In this first part of the series, you learned how to use CloudFormation to fully automate the provisioning of the EC2 Container Service and Docker which includes ELB, Auto Scaling, and VPC resources. You also learned how to setup a CodeCommit repository.

In the next and last part of this series, you’ll learn how to orchestrate all of the changes into a deployment pipeline to achieve Continuous Delivery using CodePipeline and Jenkins so that any change made to the CodeCommit repo can be deployed to production in an automated fashion. I’ll provide access to all the code resources in part 2 of this series. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Resources

Here’s a list of some of the resources described in this post:

Acknowledgements

My colleague Jeff Bachtel provided the thoughts on reasons why some teams might choose to use Docker and ECS over serverless. I also used several resources from AWS including the PHP sample app, the Introduction to AWS CodeCommit video, the CodePipeline Starter Kit and the ECS CloudFormation snippets.