Service discovery for microservices with mu

mu is a tool that makes it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this fourth post of the blog series focused on the mu tool, we will use mu to set up Consul for service discovery between multiple microservices.

Why do I need service discovery?

One of the biggest benefits of a microservices architecture is that the services can be deployed independently of one another.  However, this presents a new challenge in that it becomes difficult for clients to know the list of containers to use when invoking the service.  Here are three different approaches to address this challenge:

  • Load balancer per microservice: Create a load balancer for every microservice and add/remove containers to the load balancer as deployments and scaling events occur.  The endpoint address of the load balancer is then shared with clients through some manual process.


There are three concerns with this approach.  First, the endpoint address of the load balancer must never change or else all the clients will be broken and require updates to take the new endpoint address.  This can be addressed via DNS CNAME records, but still requires that the name chosen for the record must not change.  Second, there is the additional cost of a load balancer for every microservice.  Finally, there is additional latency introduced with adding a load balancer between each microservice invocation.

  • Shared load balancer: Create a load balancer that is shared by all microservices in an environment.  The load balancer must have rules for each microservice to route requests by URI patterns.


The concern with this approach is that all traffic is now flowing through a single load balancer which can become a constraint in scaling the entire system.  Additionally, the load balancer becomes a shared resource amongst all the microservice teams, potentially impacting a team’s ability to operate independently of other teams.

  • Client load balancer: Load balancing from within the client is an approach in which the client has an awareness of all the containers in-service for a given microservice.  The client can then load balance between the containers when invoking the microservice.  This approach requires a system to provide service registration and service discovery.   


The benefit of this approach is that there are no longer load balancers between each microservice request, so all the concerns with the prior approaches are addressed.  However, a new type of microservice, an edge service, will need to be deployed to allow clients outside the microservice environment (that do not have access to service discovery) to invoke the service.

The preferred approach is the third one: service discovery and client-side load balancing within the microservice environment, and edge routing with traditional load balancing for clients outside the microservice environment.  This approach provides the lowest latency and most loosely coupled solution for microservice invocation.

Let mu help!

The environment that mu creates for your microservice can manage the provisioning of Consul for service discovery and registration of your microservices.  Consul is a sort of phonebook for microservices.  It provides APIs for services to register their endpoints and for clients to look up the endpoints.
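
Under the covers, Consul exposes an HTTP API for these lookups.  For example, from a container in the environment you could query the catalog for the banana service (this assumes Consul’s default port 8500 on the Docker host; the response is abridged and the node, address, and port values are illustrative):

~ ❯❯❯ curl -s http://172.17.0.1:8500/v1/catalog/service/banana-service | jq
[
  {
    "Node": "ip-172-31-21-26",
    "ServiceName": "banana-service",
    "ServiceAddress": "10.0.1.25",
    "ServicePort": 32768
  }
]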

Let’s demonstrate this by adding a milkshake service that invokes the banana service from the first post.  Additionally, we will create a zuul-router service to provide an edge service via Netflix’s Zuul.  Zuul is a proxy service that serves as the front door for all requests from outside the microservice environment.  Zuul will use Consul for service discovery to determine where best to route each incoming request.  Zuul also provides an excellent location to enforce policies such as authentication, authorization or logging on all incoming requests.
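
As an illustrative sketch of such a policy (hypothetical code, not part of the sample zuul-router application), a simple pre-filter that logs every incoming request only needs to extend ZuulFilter:

import javax.servlet.http.HttpServletRequest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;

// Hypothetical example of an edge policy: log every request routed by Zuul
@Component
public class RequestLoggingFilter extends ZuulFilter {

   private static final Logger log = LoggerFactory.getLogger(RequestLoggingFilter.class);

   @Override
   public String filterType() {
      return "pre";    // run before the request is routed to a backing service
   }

   @Override
   public int filterOrder() {
      return 1;
   }

   @Override
   public boolean shouldFilter() {
      return true;     // apply to every request
   }

   @Override
   public Object run() {
      HttpServletRequest request = RequestContext.getCurrentContext().getRequest();
      log.info("{} {}", request.getMethod(), request.getRequestURI());
      return null;     // the return value is ignored for pre filters
   }
}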

Enabling Consul and Edge Router

The first thing we will want to do is set up our edge router with Zuul.  This is just a matter of adding the @EnableZuulProxy and @EnableDiscoveryClient annotations to the Spring Boot application:

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
public class ZuulRouterApplication {

   public static void main(String[] args) {
     SpringApplication.run(ZuulRouterApplication.class, args);
   }
}

Zuul is configured via the application.yml file in src/main/resources.  For each service that we want exposed via the edge router, we add URI path patterns:

spring:
  application:
    name: zuul-router
zuul:
  routes:
    milkshake-service:
      path: /milkshakes/**
      stripPrefix: false
    banana-service:
      path: /bananas/**
      stripPrefix: false

In order to enable Consul in your environment, you need to update the environment definition in the mu.yml file.  Additionally, you need to configure Spring Cloud Consul to connect to the Docker host IP address for service discovery.  We will also want to configure Spring Cloud to not register with Consul, since mu will already configure the Registrator agent on your ECS container instances:

environments:
- name: acceptance
  cluster:
    maxSize: 5
  discovery:
    provider: consul
- name: production

service:
  name: zuul-router
  port: 8080
  pathPatterns:
  - /*
  environment:
    SPRING_CLOUD_CONSUL_HOST: 172.17.0.1
    SPRING_CLOUD_CONSUL_DISCOVERY_REGISTER: 'false'
  pipeline:
    source:
      provider: GitHub
      repo: cplee/zuul-router
    build:
      image: aws/codebuild/java:openjdk-8

Create Milkshake Service

Now we can create a new service to manage the creation of milkshakes.  The service looks very similar to the banana service, with the exception of declaring a Spring RestTemplate annotated with @LoadBalanced to enable client-side load balancing via Ribbon.

 

@SpringBootApplication
@EnableDiscoveryClient
public class MilkshakeApplication {

  @LoadBalanced
  @Bean
  RestTemplate restTemplate(){
     return new RestTemplate();
  }
}

Now we can use the RestTemplate to make calls directly to the banana service.  Ribbon will look up the service named banana-service in Consul and replace the hostname in the URL with the IP address and port of one of the in-service containers:

@Component
public class BananaProvider implements FlavorProvider {

  @Autowired
  private RestTemplate restTemplate;

  private List<Map<String,Object>> getAll() {
    ParameterizedTypeReference<List<Map<String, Object>>> typeRef =
            new ParameterizedTypeReference<List<Map<String, Object>>>() {};

    ResponseEntity<List<Map<String, Object>>> exchange =
            this.restTemplate.exchange("http://banana-service/bananas",HttpMethod.GET,null, typeRef);

    return exchange.getBody();
  }
}

Try it out!

After we have deployed all three services, we can use mu to confirm that all are running as expected.

~ ❯❯❯ mu env show acceptance                                                                                                                                                                                                       

Environment:    acceptance
Cluster Stack:  mu-cluster-dev (UPDATE_COMPLETE)
VPC Stack:      mu-vpc-dev (UPDATE_COMPLETE)
Bastion Host:   35.164.117.25
Base URL:       http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com

Container Instances:
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
|    EC2 INSTANCE     |   TYPE   |     AMI      |     AZ     | CONNECTED | STATUS | # TASKS | CPU AVAIL | MEM AVAIL |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
| i-08e3edc8c644f0534 | t2.micro | ami-62d35c02 | us-west-2b | true      | ACTIVE |       3 |       604 |       139 |
| i-05bc14a67e53889e1 | t2.micro | ami-62d35c02 | us-west-2a | true      | ACTIVE |       3 |       604 |       139 |
| i-0b56a0d9572531e9e | t2.micro | ami-62d35c02 | us-west-2c | true      | ACTIVE |       3 |       604 |       139 |
| i-05b2188a5c575fbeb | t2.micro | ami-62d35c02 | us-west-2b | true      | ACTIVE |       1 |       624 |       739 |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+

Services:
+-------------------+---------------------------+------------------+---------------------+
|      SERVICE      |         IMAGE             |      STATUS      |     LAST UPDATE     |
+-------------------+---------------------------+------------------+---------------------+
| milkshake-service | milkshake-service:9e4bcd9 | CREATE_COMPLETE  | 2017-05-12 11:33:05 |
| zuul-router       | zuul-router:3d4795c       | UPDATE_COMPLETE  | 2017-05-12 12:09:47 | 
| banana-service    | banana-service:3b62124    | UPDATE_COMPLETE  | 2017-05-12 11:32:55 |
+-------------------+---------------------------+------------------+---------------------+

We can then use curl to get a list of all the bananas available via the banana-service:

curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq
[
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/9"
      }
    ]
  }
]

Next we try to create a milkshake using the milkshake-service:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/milkshakes\?flavor\=Banana | jq                                                                         
{
  "timestamp": "2017-05-15T19:12:56.640+0000",
  "status": 500,
  "error": "Internal Server Error",
  "exception": "org.springframework.web.client.HttpClientErrorException",
  "message": "429 Not enough bananas to make the shake.",
  "path": "/milkshakes"
}

Looks like there aren’t enough bananas to create a milkshake.  Let’s create another banana:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas

~ ❯❯❯ curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq                                                                                                                         
[
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/9"
      }
    ]
  },
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/10"
      }
    ]
  }
]

Now let’s try creating a milkshake again:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/milkshakes\?flavor\=Banana | jq
{
  "id": 3,
  "flavor": "Banana"
}

This time it worked, and if we query the list of bananas again, we see that 2 have been deleted for the milkshake:

~ ❯❯❯ curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq                                                                                                                        
[]

Conclusion

Decomposing a monolithic application into microservices presents an interesting challenge in enabling services to invoke one another while still keeping the services loosely coupled.  Using a client-side load balancer like Ribbon along with a service discovery tool like Consul provides an excellent solution to this challenge.  As demonstrated in this post, mu makes it simple to enable service discovery in your microservice environment to help achieve this solution.  Head over to stelligent/mu on GitHub and get started!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Microservice databases with mu

mu is a tool that makes it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this third post of the blog series focused on the mu tool, we will use mu to manage microservice databases in the pipeline we built in the first post.  

Why should my microservice manage the database?

As discussed in prior posts, adopting a microservice architecture can increase a team’s ability to deliver software faster through decoupling and team autonomy.  By decomposing an application into microservices and then giving teams complete ownership of their microservices, the teams can then make decisions and implement changes independent of other teams and their microservices.

Unless the same approach is taken to decompose the databases that support the microservices, the benefits of microservices will be limited by the cross-team dependencies on shared databases.  When your microservices share a database, then in effect you’ve used the database as an API between the services.  This type of architecture causes tight coupling between services and will likely require regression testing and even deployment of multiple services at the same time.

Martin Fowler, in his post titled Microservices, says “Microservices prefer letting each service manage its own database.”  By decomposing all the way down into the database you can realize the benefits of agility that microservices have to offer.

Source: https://martinfowler.com/articles/microservices.html

Let mu help!

The continuous delivery pipeline that mu creates for your microservice can manage the provisioning of a database.  Additionally, the details about the database can be injected into your service as environment variables.

Let’s demonstrate this by adding a database to the microservice pipeline we created in the first post for the banana service.

Define the database

Previously, the banana service was using an embedded H2 database.  This won’t work in a production environment, so we need an RDS database instance that the microservice can use.  Adding a database for a service with mu is as simple as adding a couple of lines to your mu.yml file:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana

By default, this will create an RDS database instance of class db.t2.small with the Aurora engine.  Next we need to reference the database from our microservice.  We can pass the database URL and credentials via environment variables:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana

  environment:
    SPRING_DATASOURCE_USERNAME: ${DatabaseMasterUsername}
    SPRING_DATASOURCE_PASSWORD: ${DatabaseMasterPassword}
    SPRING_DATASOURCE_URL: jdbc:mysql://${DatabaseEndpointAddress}:${DatabaseEndpointPort}/${DatabaseName}

This approach does have the disadvantage of passing database credentials as environment variables.  This presents a security issue, as any IAM user/role with access to the ECS task API would be able to discover the credentials.

AWS has recently announced IAM database authentication, which the microservice can utilize to obtain temporary database credentials via an AWS API call.  Although we will save the details for a future blog post, for now it’s worth mentioning that mu can configure the database for IAM database authentication to work around this issue of passing credentials as environment variables.  This would be accomplished with a mu.yml like this:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana
    instanceClass: db.t2.medium
    iamAuthentication: true

  environment:
    SPRING_DATASOURCE_URL: jdbc:mysql://${DatabaseEndpointAddress}:${DatabaseEndpointPort}/${DatabaseName}

The configuration of the tables and the data in the database is managed with Liquibase.  When the service is started, Liquibase creates/updates the database tables and data.  This is accomplished by creating a file named db.changelog-master.yaml in src/main/resources/db/changelog/.
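
For reference, here is a minimal sketch of what such a changelog might contain; the table and column definitions below are illustrative rather than the actual banana service schema:

databaseChangeLog:
- changeSet:
    id: 1
    author: banana-service
    changes:
    - createTable:
        tableName: banana
        columns:
        - column:
            name: id
            type: bigint
            autoIncrement: true
            constraints:
              primaryKey: true
              nullable: false
        - column:
            name: picked_at
            type: datetime
        - column:
            name: peeled
            type: boolean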

Now we can commit and push our changes to cause a new run of the pipeline to occur:

$ git add --all && git commit -m "add database" && git push

We see our pipeline is green, so we have confidence that the new database is working properly with the microservice.

Conclusion

Realizing the benefits of microservices requires decomposing not just the application, but also the databases that support it.  As demonstrated in this post, mu makes it simple to manage your databases and wire them up to your microservices.  The goal is that mu empowers you to implement microservice best practices in your application.

In the upcoming posts in this blog series, we will look into:

  • Service Discovery – use mu to enable service discovery via `Consul` to allow for inter-service communication
  • Additional Use Cases – deploy applications other than microservices via mu, like a wordpress stack

Until then, head over to stelligent/mu on GitHub and get started!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Automating ECS: Provisioning in CloudFormation (Part 1)

In this two-part series, you’ll learn how to provision, configure, and orchestrate EC2 Container Service (ECS) applications into a deployment pipeline that’s capable of deploying new infrastructure and code changes whenever developers commit changes to a version-control repository, so that team members can release new changes to users whenever they choose to do so: Continuous Delivery.

While the primary AWS service described in this solution is ECS, I’ll also be covering the various components and services that support this solution, including AWS CloudFormation, EC2 Container Registry (ECR), Docker, Identity and Access Management (IAM), VPC and Auto Scaling Services – to name a few. In part 2, I’ll be covering the integration of CodePipeline, Jenkins and CodeCommit in greater detail.

ECS allows you to run Docker containers on Amazon. The benefits of ECS and Docker include the following:

  • Portability – You can build an image on one Linux operating system and have it work on others without modification. It’s also portable across environment types, so you can build it in development and use the same image in production.
  • Scalability – You can run multiple containers on the same EC2 instance and scale to thousands of tasks across a cluster.
  • Speed – Increase your speed of development and speed of runtime execution.

“ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.” [1]

The reason you might use Docker-based containers over traditional virtual machine-based application deployments is that it allows a faster, more flexible, and still very robust immutable deployment pattern in comparison with services such as traditional Elastic Beanstalk, OpsWorks, or native EC2 instances.

While you can very effectively integrate Docker into Elastic Beanstalk, ECS provides greater overall flexibility.

The reason you might use ECS or Elastic Beanstalk containers with EC2 Container Registry over similar offerings such as Docker Hub or Docker Trusted Registry is higher performance, better availability, and lower pricing. In addition, ECR utilizes other AWS services such as IAM and S3, allowing you to compose more secure or robust patterns to meet your needs.

Based on the current implementation of Lambda, the reasons you might choose to utilize ECS instead of serverless architectures include:

  • Lower latency in request response time
  • Flexibility in the underlying language stack to use
  • Elimination of AWS Lambda service limits (requests per second, code size, total code runtime)
  • Greater control of the application runtime environment
  • The ability to link modules in ways not possible with Lambda functions

I’ll be using a sample PHP application provided by AWS to demonstrate a Continuous Delivery pipeline using ECS, CloudFormation and, in part 2, AWS CodePipeline.

Create and Connect to a CodeCommit Repository

While you can store your application code in any version-control repository, in this example, I’ll be using the AWS CodeCommit Git repository. I’ll be integrating CodeCommit with CodePipeline. I’m basing the code from the Amazon ECS PHP Simple Demo App located at https://github.com/awslabs/ecs-demo-php-simple-app.

To create your own CodeCommit repo,  follow these instructions: Create and Connect to an AWS CodeCommit Repository. Take note of the repository name as you’ll be using it as a CloudFormation user parameter in part 2. I called my CodeCommit repository ecs-demo. You can call it the same but if you do name it something different, be sure to replace the samples with your repo name.

After you create your CodeCommit repo, copy the contents from the AWS PHP ECS Demo app and commit all of the files.

CodeCommit provides the following features and benefits[2]:

  • Highly available, secure, and private Git repositories
  • Use your existing Git tools
  • Automatically encrypts all files in transit and at rest
  • Provides Webhooks – to trigger Lambda functions or push notifications in response to events
  • Integrated with other AWS services like IAM so you can define user-specific permissions

Create a Private Image Repository in ECS using ECR

You can create private Docker repositories using EC2 Container Registry (ECR) to store your Docker images. Follow these instructions to manually create an ECR repository: Create a Repository.

A snippet of the CloudFormation template for provisioning an ECR repo is listed below.

    "MyRepository":{
      "Type":"AWS::ECR::Repository",
      "Properties":{
        "RepositoryName":{
          "Ref":"AWS::StackName"
        },
        "RepositoryPolicyText":{
          "Version":"2008-10-17",
          "Statement":[
            {
              "Sid":"AllowPushPull",
              "Effect":"Allow",
              "Principal":{
                "AWS":[
                  {
                    "Fn::Join":[
                      "",
                      [
                        "arn:aws:iam::",
                        {
                          "Ref":"AWS::AccountId"
                        },
                        ":user/",
                        {
                          "Ref":"IAMUsername"
                        }
                      ]
                    ]
                  }
                ]
              },
              "Action":[
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload"
              ]
            }
          ]
        }
      }
    }

With an ECR repository defined, you can securely store your Docker images and refer to them when building, tagging, and pushing those images.

To launch the CloudFormation stack that creates an ECR repository, click the Launch Stack button. Your IAM username is a parameter to this CloudFormation template. You only need to enter the IAM username (and not the entire ARN) as the input value. Make note of the ECSRepository Output from the stack as you’ll be using it as an input to the ECS Environment Stack in part 2.
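
Alternatively, you can launch the same template with the AWS CLI.  The stack name and template file path below are placeholders for wherever you saved the ECR template:

aws cloudformation create-stack \
  --stack-name ecs-demo-ecr \
  --template-body file://ecr-repository.json \
  --parameters ParameterKey=IAMUsername,ParameterValue=YOUR_IAM_USERNAME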

Docker

“Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.” [3] In this demonstration, you’ll build, tag and push a PHP application as a Docker image into an ECR repository.

Build Docker Image and Upload to ECR Locally

Prerequisites

  • You’re running these commands from an Amazon Linux EC2 instance. If you’re not, you’ll need to adapt the instructions according to your OS flavor.
  • You’ve created an ECR repo (see the “Create a Private Image Repository in ECS using ECR” section above)
  • You’ve created a CodeCommit repository and committed the PHP code from the AWS PHP app in GitHub (see the “Create and Connect to a CodeCommit Repository” section above)

Steps

  1. Install Docker on an Amazon Linux EC2 instance for which your AWS CLI has been configured (you can find detailed instructions at Install Docker)
    sudo yum update -y
    sudo yum install -y docker
    sudo service docker start
    sudo usermod -a -G docker ec2-user
  2. Log out and log back in, then type:
    docker info
  3. Install Git:
    sudo yum -y install git*
  4. Clone the ECS PHP example application (if you used a different repo name, be sure to update the sample command here):
    git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/ecs-demo
  5. Change your directory:
    cd ecs-demo
  6. Configure your AWS account by running the command below and following the prompts to enter your credentials, region and output format.
    aws configure
  7. Run the command below to log in to ECR.
    eval $(aws --region us-east-1 ecr get-login)
  8. Build the image using Docker. Replace REPOSITORY_NAME with the ECSRepository Output from the ECR stack you launched and TAG with a unique value. Make note of the image tag you use when creating the Docker image, as you’ll be using it as an input parameter to a CloudFormation stack later. If you want to use the default value, just name it latest.
    docker build -t REPOSITORY_NAME:TAG .
  9. Tag the image (replace REPOSITORY_NAME, TAG and AWS_ACCOUNT_ID):
    docker tag REPOSITORY_NAME:TAG AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:TAG
  10. Push the tagged image to ECR (replace REPOSITORY_NAME, AWS_ACCOUNT_ID and TAG):
    docker push AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/REPOSITORY_NAME:TAG
  11. Verify the image was uploaded to your ECS Repository by going to your AWS ECS Console, clicking on Repositories, and selecting the repository you created when you launched the ECR stack.

Dockerfile

“A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.” [4] The snippet below is the Dockerfile for the PHP sample application. You can see that it runs OS updates, installs the required packages (including Apache and PHP), and then configures the HTTP server and port. While these are the types of steps you might run in any automated build and deployment script, the difference is that they run within a container, which means they execute very quickly, you can run the same steps across operating systems, and you can run these procedures across multiple tasks in a cluster.

FROM ubuntu:12.04

# Install dependencies
RUN apt-get update -y
RUN apt-get install -y git curl apache2 php5 libapache2-mod-php5 php5-mcrypt php5-mysql

# Install app
RUN rm -rf /var/www/*
ADD src /var/www

# Configure apache
RUN a2enmod rewrite
RUN chown -R www-data:www-data /var/www
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D",  "FOREGROUND"]

This Dockerfile gets run when you run the docker build command. This file has been committed to my CodeCommit repo as you can see in the figure below.

AWS CodeCommit repository for a PHP application illustrating Dockerfile location

Create an ECS Environment in CloudFormation

In this section, I’m describing how to configure the entire ECS stack in CloudFormation.  This includes the architecture, its dependencies, and the key CloudFormation resources that make up the stack.

Architecture

The overall solution architecture is illustrated in the CloudFormation diagram below.

Provisioning, Configuring and Orchestrating an EC2 Container Service Architecture

  • Auto Scaling Group – I’m using an auto scaling group to scale the underlying EC2 infrastructure in the ECS Cluster. It’s used in conjunction with the Launch Configuration.
  • Auto Scaling Launch Configuration – I’m using a launch configuration to scale the underlying EC2 infrastructure in the ECS Cluster. It’s used in conjunction with the  Auto Scaling Group.
  • CodeCommit – I’m using CodeCommit as my Git repo to store the application and infrastructure code.
  • CodePipeline – CodePipeline describes my Continuous Delivery workflow. In particular, it integrates with CodeCommit and Jenkins to run actions every time someone commits new code to the CodeCommit repo. This will be covered in more detail in part 2.
  • ECS Cluster – “An ECS cluster is a logical grouping of container instances that you can place tasks on.”[6]
  • ECS Service – With an ECS service, you can run a specific number of instances of a task definition simultaneously in an ECS cluster [5]
  • ECS Task Definition – A task definition is the core resource within ECS. This is where you define which Docker images to run, CPU/Memory, ports, commands and so on. Everything else in ECS is based upon the task definition
  • Elastic Load Balancer – The ELB provides the endpoint for the application. The ELB dynamically determines which EC2 instance in the cluster is serving the running ECS tasks at any given time.
  • IAM Instance Profile – “An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.” [7] In the sample, I’m using the instance profile to pass the role that the launch configuration applies to the underlying EC2 instances on which the ECS cluster runs.
  • IAM Roles – I’m describing roles that have access to certain AWS resources for the EC2 instances (for ECS), Jenkins and CodePipeline
  • Jenkins – I’m using Jenkins to execute the actions that I’ve defined in CodePipeline. For example, I have a bash script that updates the CloudFormation stack when an ECS Service is updated. This action is orchestrated via CodePipeline and then executed on the Jenkins server in one of its configured jobs. This will be covered in more detail in part 2.
  • Virtual Private Cloud (VPC) – In the CloudFormation template, I’m using a VPC template that we developed to define VPC resources such as: VPCGatewayAttachment, SecurityGroup, SecurityGroupIngress, SecurityGroupEgress, SubnetNetworkAclAssociation, NetworkAclEntry, NetworkAcl, SubnetRouteTableAssociation, Route, RouteTable, InternetGateway, and Subnet

Dependencies

There are four core dependencies in this solution: EC2 Key Pair, CodeCommit Repo, a VPC, and an ECR repo and Docker Image

  • EC2 Key Pair – A key pair for which you have access. See Create a Key Pair.
  • CodeCommit – In this demo, I’m using an AWS CodeCommit Git repo to store the PHP application code along with my Docker configuration. See the instructions for configuring a Git repo in CodeCommit above
  • VPC – This template requires that an existing AWS Virtual Private Cloud (VPC) has already been created
  • ECR repo and image – You should have created an EC2 Container Registry (ECR) repository using the CloudFormation template from the previous section. You should have also built, tagged and pushed a Docker image to ECR using the instructions described at Create a Private Image Repository in ECS using ECR above

ECS Cluster

With an ECS Cluster, you can manage multiple services. An ECS Container Instance runs an ECS agent that is registered to the ECS Cluster. To define an ECS Cluster in CloudFormation, use the Cluster resource: AWS::ECS::Cluster as shown below.

    "EcsCluster":{
      "Type":"AWS::ECS::Cluster",
      "DependsOn":[
        "MyVPC"
      ]
    },

ECS Service

An ECS Service defines a task definition and a desired number of task instances. A service manages tasks of a specified task definition.

In the context of ECS, an ELB distributes load between the different EC2 instances hosting your tasks, so you can optionally create a new ELB when creating a service.

To define an ECS Service in CloudFormation, use the Service resource: AWS::ECS::Service.

    "EcsService":{
      "Type":"AWS::ECS::Service",
      "DependsOn":[
        "MyVPC",
        "ECSAutoScalingGroup"
      ],
      "Properties":{
        "Cluster":{
          "Ref":"EcsCluster"
        },
        "DesiredCount":"1",
        "DeploymentConfiguration":{
          "MaximumPercent":100,
          "MinimumHealthyPercent":0
        },
        "LoadBalancers":[
          {
            "ContainerName":"php-simple-app",
            "ContainerPort":"80",
            "LoadBalancerName":{
              "Ref":"EcsElb"
            }
          }
        ],
        "Role":{
          "Ref":"EcsServiceRole"
        },
        "TaskDefinition":{
          "Ref":"PhpTaskDefinition"
        }
      }
    },

Notice that I defined a DeploymentConfiguration with a MinimumHealthyPercent of 0.  Since I’m only using one EC2 instance in development, the ECS service would otherwise fail during a CloudFormation update; by setting the MinimumHealthyPercent to zero, the application instead experiences a few seconds of downtime during the update.  Like most applications/services these days, if I needed continual uptime, I’d increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.
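
As an illustrative variation (not the configuration used in this post), with two or more container instances you could keep at least half of the tasks in service throughout a deployment:

        "DesiredCount":"2",
        "DeploymentConfiguration":{
          "MaximumPercent":200,
          "MinimumHealthyPercent":50
        },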

Task Definition

With an ECS Task Definition, you can define multiple Container Definitions and volumes. With a Container Definition, you define port mappings, environment variables, CPU Units and Memory. An ECS Volume is a persistent volume to mount and map to container volumes.

To define an ECS Task Definition, use the ECS Task Definition resource: AWS::ECS::TaskDefinition.

    "PhpTaskDefinition":{
      "Type":"AWS::ECS::TaskDefinition",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "ContainerDefinitions":[
          {
            "Name":"php-simple-app",
            "Cpu":"10",
            "Essential":"true",
            "Image":{
              "Fn::Join":[
                "",
                [
                  {
                    "Ref":"AWS::AccountId"
                  },
                  ".dkr.ecr.us-east-1.amazonaws.com/",
                  {
                    "Ref":"ECSRepoName"
                  },
                  ":",
                  {
                    "Ref":"ImageTag"
                  }
                ]
              ]
            },
            "Memory":"300",
            "PortMappings":[
              {
                "HostPort":80,
                "ContainerPort":80
              }
            ]
          }
        ],
        "Volumes":[
          {
            "Name":"my-vol"
          }
        ]
      }
    },

Auto Scaling

To define an Auto Scaling Group, use the Auto Scaling Group resource in CloudFormation: AWS::AutoScaling::AutoScalingGroup.

    "ECSAutoScalingGroup":{
      "Type":"AWS::AutoScaling::AutoScalingGroup",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "VPCZoneIdentifier":[
          {
            "Ref":"publicSubnet01"
          },
          {
            "Ref":"publicSubnet02"
          }
        ],
        "LaunchConfigurationName":{
          "Ref":"ContainerInstances"
        },
        "MinSize":"1",
        "MaxSize":{
          "Ref":"MaxSize"
        },
        "DesiredCapacity":{
          "Ref":"DesiredCapacity"
        }
      },
      "CreationPolicy":{
        "ResourceSignal":{
          "Timeout":"PT15M"
        }
      },
      "UpdatePolicy":{
        "AutoScalingRollingUpdate":{
          "MinInstancesInService":"1",
          "MaxBatchSize":"1",
          "PauseTime":"PT15M",
          "WaitOnResourceSignals":"true"
        }
      }
    },

To define a Launch Configuration, use the Launch Configuration resource in CloudFormation: AWS::AutoScaling::LaunchConfiguration.

    "ContainerInstances":{
      "Type":"AWS::AutoScaling::LaunchConfiguration",
      "DependsOn":[
        "MyVPC"
      ],
      "Metadata":{
        "AWS::CloudFormation::Init":{
          "config":{
            "commands":{
              "01_add_instance_to_cluster":{
                "command":{
                  "Fn::Join":[
                    "",
                    [
                      "#!/bin/bash\n",
                      "echo ECS_CLUSTER=",
                      {
                        "Ref":"EcsCluster"
                      },
                      " >> /etc/ecs/ecs.config"
                    ]
                  ]
                }
              }
            },
            "files":{
              "/etc/cfn/cfn-hup.conf":{
                "content":{
                  "Fn::Join":[
                    "",
                    [
                      "[main]\n",
                      "stack=",
                      {
                        "Ref":"AWS::StackId"
                      },
                      "\n",
                      "region=",
                      {
                        "Ref":"AWS::Region"
                      },
                      "\n"
                    ]
                  ]
                },
                "mode":"000400",
                "owner":"root",
                "group":"root"
              },
              "/etc/cfn/hooks.d/cfn-auto-reloader.conf":{
                "content":{
                  "Fn::Join":[
                    "",
                    [
                      "[cfn-auto-reloader-hook]\n",
                      "triggers=post.update\n",
                      "path=Resources.ContainerInstances.Metadata.AWS::CloudFormation::Init\n",
                      "action=/opt/aws/bin/cfn-init -v ",
                      "         --stack ",
                      {
                        "Ref":"AWS::StackName"
                      },
                      "         --resource ContainerInstances ",
                      "         --region ",
                      {
                        "Ref":"AWS::Region"
                      },
                      "\n",
                      "runas=root\n"
                    ]
                  ]
                }
              }
            },


IAM

To define an IAM Instance Profile, use the InstanceProfile resource in CloudFormation: AWS::IAM::InstanceProfile.

    "EC2InstanceProfile":{
      "Type":"AWS::IAM::InstanceProfile",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "Path":"/",
        "Roles":[
          {
            "Ref":"EC2Role"
          }
        ]
      }
    },
    "JenkinsRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Sid":"",
              "Effect":"Allow",
              "Principal":{
                "Service":"ec2.amazonaws.com"
              },
              "Action":"sts:AssumeRole"
            }
          ]
        },
        "Path":"/"
      }
    },

To define an IAM Role, use the IAM Role resource in CloudFormation: AWS::IAM::Role. The snippet below is for the EC2 role.

    "EC2Role":{
      "Type":"AWS::IAM::Role",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  "ec2.amazonaws.com"
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"ecs-service",
            "PolicyDocument":{
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "ecs:CreateCluster",
                    "ecs:RegisterContainerInstance",
                    "ecs:DeregisterContainerInstance",
                    "ecs:DiscoverPollEndpoint",
                    "ecs:Submit*",
                    "ecr:*",
                    "ecs:Poll"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

The snippet below is for defining the ECS IAM role.

    "EcsServiceRole":{
      "Type":"AWS::IAM::Role",
      "Properties":{
        "AssumeRolePolicyDocument":{
          "Statement":[
            {
              "Effect":"Allow",
              "Principal":{
                "Service":[
                  "ecs.amazonaws.com"
                ]
              },
              "Action":[
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path":"/",
        "Policies":[
          {
            "PolicyName":"ecs-service",
            "PolicyDocument":{
              "Statement":[
                {
                  "Effect":"Allow",
                  "Action":[
                    "elasticloadbalancing:Describe*",
                    "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                    "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                    "ec2:Describe*",
                    "ec2:AuthorizeSecurityGroupIngress"
                  ],
                  "Resource":"*"
                }
              ]
            }
          }
        ]
      }
    },

EC2

To define security group ingress within a VPC, use the SecurityGroupIngress resource in CloudFormation: AWS::EC2::SecurityGroupIngress.

    "InboundRule":{
      "Type":"AWS::EC2::SecurityGroupIngress",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"0",
        "ToPort":"65535",
        "SourceSecurityGroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        },
        "GroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        }
      }
    },

To define the security group egress within a VPC, use the SecurityGroupEgress resource in CloudFormation: AWS::EC2::SecurityGroupEgress.

    "OutboundRule":{
      "Type":"AWS::EC2::SecurityGroupEgress",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "IpProtocol":"tcp",
        "FromPort":"0",
        "ToPort":"65535",
        "DestinationSecurityGroupId":{
          "Fn::GetAtt":[
            "TargetSG",
            "GroupId"
          ]
        },
        "GroupId":{
          "Fn::GetAtt":[
            "SourceSG",
            "GroupId"
          ]
        }
      }
    },

To define the security group within a VPC, use the SecurityGroup resource in CloudFormation: AWS::EC2::SecurityGroup.

    "SourceSG":{
      "Type":"AWS::EC2::SecurityGroup",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "VpcId":{
          "Ref":"MyVPC"
        },
        "GroupDescription":"Sample source security group",
        "SecurityGroupIngress":[
          {
            "IpProtocol":"tcp",
            "FromPort":"80",
            "ToPort":"80",
            "CidrIp":"0.0.0.0/0"
          }
        ],
        "Tags":[
          {
            "Key":"Name",
            "Value":{
              "Fn::Join":[
                "",
                [
                  {
                    "Ref":"AWS::StackName"
                  },
                  "-SourceSG"
                ]
              ]
            }
          }
        ]
      }
    },

ELB

To define the ELB, use the LoadBalancer resource in CloudFormation: AWS::ElasticLoadBalancing::LoadBalancer.

    "EcsElb":{
      "Type":"AWS::ElasticLoadBalancing::LoadBalancer",
      "DependsOn":[
        "MyVPC"
      ],
      "Properties":{
        "Subnets":[
          {
            "Ref":"publicSubnet01"
          },
          {
            "Ref":"publicSubnet02"
          }
        ],
        "Listeners":[
          {
            "LoadBalancerPort":"80",
            "InstancePort":"80",
            "Protocol":"HTTP"
          }
        ],
        "SecurityGroups":[
          {
            "Ref":"SourceSG"
          },
          {
            "Ref":"TargetSG"
          }
        ],
        "HealthCheck":{
          "Target":"HTTP:80/",
          "HealthyThreshold":"2",
          "UnhealthyThreshold":"10",
          "Interval":"30",
          "Timeout":"5"
        }
      }
    },

Summary

In this first part of the series, you learned how to use CloudFormation to fully automate the provisioning of the EC2 Container Service and Docker, which includes ELB, Auto Scaling, and VPC resources. You also learned how to set up a CodeCommit repository.

In the next and last part of this series, you’ll learn how to orchestrate all of the changes into a deployment pipeline to achieve Continuous Delivery using CodePipeline and Jenkins so that any change made to the CodeCommit repo can be deployed to production in an automated fashion. I’ll provide access to all the code resources in part 2 of this series. Let us know if you have any comments or questions @stelligent or @paulduvall.

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “everything-as-code” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Resources

Here’s a list of some of the resources described in this post:

Acknowledgements

My colleague Jeff Bachtel provided the thoughts on reasons why some teams might choose to use Docker and ECS over serverless. I also used several resources from AWS including the PHP sample app, the Introduction to AWS CodeCommit video, the CodePipeline Starter Kit and the ECS CloudFormation snippets.