Blog

WordPress, Mu, and You

Today, we’re going to give you the “Hello, world” of web stacks and explain how you could run WordPress with mu. This is the fifth post in Stelligent’s blog series on mu, where we’ll lay out a path for using mu to run WordPress and try to hit a target that’s often hard to find in that part of our world: managing a WordPress stack with infrastructure as code, utilizing a continuous delivery pipeline. To make this easier to follow, we’re going to keep the feature list short and simple. Look for future posts in the Stelligent blog where we’ll talk about adding more functionality.

Why would I want to run WordPress with mu?

mu will take care of a lot of nice things for you. It creates a CodePipeline stack that will manage your deployment workflow, and you’ll have test and production environments out of the box. You can inspect changes in your test instance before you let them continue through the pipeline.  It will manage your databases in RDS, so your backend data layer gets its own high availability and scaling that’s independent of your other resources; your test and production data are also kept separate. Each environment will run in Amazon’s EC2 Container Service, behind a load balancer, so the pipeline can roll out new containers without bringing the old ones down. When it’s all done, you can even export CloudFormation templates for your infrastructure and manage them on your own — you’re never locked into using mu!

We’re going to deploy a Docker container from the official WordPress image.  How do we get there? mu will manage a CodePipeline stack that orchestrates everything: it creates a custom image with your web data and the official image, then uses ECS to deploy a test instance for you to inspect, and then — if you approve it — rolls it out to production. Because we’re using mu, all of this will be done through code, and we’ll use GitHub to make that code available to CodePipeline.

Stelligent created mu to simplify using microservices on AWS. It gives you a simple, flexible front-end to some really sophisticated services that can seem bewildering until you’ve gained a lot of experience with them. All those advanced products from AWS are themselves hiding the complexity of running very robust services across data centers around the world. There is vast machinery hidden behind this relatively simple command-line interface. You benefit from all of this with a really simple process for deploying your WordPress site, and you get to do it with tools native to most developers: a code editor and a command line.

Getting Started

Let’s get down to the nitty gritty. To get started, fork our repository into your own GitHub account then clone it to your workstation. Click the “Fork” button in the top right corner of the page at stelligent/mu-wordpress:

(Screenshot credit: GitHub)

Next, clone the new repo from your account to your local workstation:

git clone _your_fork_of_mu-wordpress_
cd mu-wordpress

In this directory, you’ll find a few files. Edit mu’s main config file, mu.yml, changing pipeline.source.repo to point to your own GitHub account instead of “stelligent”:

pipeline:
  source:
    provider: GitHub
    repo: _your_github_username_/mu-wordpress

Commit your changes and push them back up to your GitHub account:

git add mu.yml
git commit -m 'first config' && git push

You’re ready to start up your pipeline:

mu pipeline up

mu will ask you for a GitHub token. CodePipeline uses it to watch your repo for changes so that it can automatically deploy them. Create a new token in your own GitHub account and grant it the “repo” and “admin:repo_hook” scopes. Save it somewhere, like a nice password manager, and then provide it at mu’s prompt.

Now you can watch your pipeline get deployed:

mu pipeline logs -f

Give it a little time – it will probably take about 10 minutes for the pipeline to deploy all the resources. When it’s done, you’ll have your first environment, “test”:

mu env list

You should see a table like this, but it won’t say CREATE_COMPLETE under each environment until they’re done.

+-------------+-----------------------+---------------------+-----+
| ENVIRONMENT |         STACK         |       STATUS        | ... |
+-------------+-----------------------+---------------------+-----+
| test        | mu-cluster-test       | CREATE_COMPLETE     | ... |
+-------------+-----------------------+---------------------+-----+

On to WordPress!

Now that you have a test environment, you can initialize WordPress and make sure it’s working. Start by inspecting “test”:

mu env show test

You’ll see a block at the top that includes “Base URL”:

Environment:    test


Cluster Stack:  mu-cluster-test (UPDATE_IN_PROGRESS)
VPC Stack:      mu-vpc-test (UPDATE_COMPLETE)
Bastion Host:   1.2.3.4
Base URL:       http://mu-cl-some-long-uuid.us-east-1.elb.amazonaws.com

Append “/wp-admin” to that and load the URL in your browser:

http://mu-cl-some-long-uuid.us-east-1.elb.amazonaws.com/wp-admin

Follow the instructions there to set up a WordPress admin user and initialize the database.

First WordPress admin initialization page

Deploy to “production”

If it looks to you like it’s working, load up your CodePipeline console. You can find the URL for it by asking mu for information on your service:

mu service show | head

The first line of output will give you the URL for your pipeline stack:

Pipeline URL:   https://console.aws.amazon.com/codepipeline/home?region=...

Load that page in your web browser. Find the “Production” stage. The first step there is “Approve”, with a button labeled “Review”. Click it, fill in the comment box – any text will do – and click the “Approve” button:

Approve or reject a revision in CodePipeline

CodePipeline will deploy a new copy of the container it built into your “prod” environment. When it’s done, you can do the same thing you did with test to initialize WordPress and inspect your new web site. When you’re ready, try these commands to watch it go and get the site URL:

mu pipeline logs -f
mu env show prod | head

Behind the scenes

If you’ve made it this far, I’ll bet you’re wondering what mu is actually doing for you. Let’s take a look at the pieces mu brings together, and then at how they all fit together:

  • GitHub stores your infrastructural code.
  • mu acts as your front-end to AWS by generating and applying CloudFormation templates, orchestrated by CodePipeline.
  • CodeBuild combines your content with the official WordPress Docker container, storing the new image in Amazon ECR, then deploying it through ECS.
  • An ECS cluster is run for each environment we define: in this case, “test” and “prod”.
  • An AWS Application Load Balancer (ALB) sits in front of each cluster.
  • Your WordPress database will be provided by an Amazon RDS cluster, one for each environment. Each runs Aurora, Amazon’s highly optimized clone of MySQL.

 

Architectural diagram of mu-wordpress

Let’s talk about how your information flows through those pieces. AWS CodePipeline orchestrates all the other services in AWS. It’s used to give you a continuous delivery pipeline:

AWS CodePipeline pipeline created by mu

  1. It watches your GitHub repo for changes and automatically applies them shortly after you push.
  2. AWS CodeBuild uses buildspec.yml to run any custom steps you add there.
  3. AWS CodeBuild generates your own Docker image by combining the results of the last step with the official WordPress image and storing it in Amazon ECR.
  4. Your container is deployed to your “test” environment.
  5. You manually inspect your container and approve or reject it.
  6. If you approve it, your container is deployed to your “prod” environment.


Wrapping it up

If you want to customize your WordPress site, add content to the html directory of your repo. Anything there will be installed with WordPress as the container is built, so you have a simple way to add content to your site’s wp-content directory by adding new plugins or themes to your own codebase. Take a look at your mu.yml file, too – it includes comments that explain the settings used and suggest some things you might want to change, like the instance size deployed to each environment.
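
For example, here is a hypothetical sketch of adding a theme this way (the theme name and local path are made up; check the repo’s README for the exact layout it expects under html):

# Copy a theme into the repo so the container build installs it under wp-content
mkdir -p html/wp-content/themes/my-theme
cp -R ~/themes/my-theme/. html/wp-content/themes/my-theme/
git add html/wp-content/themes/my-theme
git commit -m 'add custom theme' && git push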

We’ve shown a fast and simple way to get WordPress running at AWS. There’s a lot of tech running behind the scenes, but mu makes it easy to get started and navigate that complexity. There’s a lot more we could do: optimizing WordPress and making your stack robust are extensive topics that deserve their own articles, and much has been written about them elsewhere. Look for future posts on the Stelligent blog as we improve our mu-wordpress code, too.

Additional Resources

Are you passionate about working with the latest AWS technologies? Are you interested in helping build tools to help automate deployments? Do you want to engage in spirited debate about how to pronounce Greek letters? Stelligent is hiring!

Vault: A Tool for Managing Secrets

Over the last few weeks, we’ve been reminded once again of why cyber security is of the utmost importance. Security breaches can cause not only denial of service but also loss of intellectual property, espionage, and many other embarrassing incidents. Rather than relying on the security team alone, every member of the IT staff should be proactive in identifying security gaps.

In this blog post, we will take one small proactive measure that can go a long way: securing database passwords.

We’ve seen this often: a mission-critical application stores the password to a mission-critical database in a config file, which is copied around to multiple servers and checked into the source code repository. More often than not, the password in this config file is unencrypted. This setup makes access to your mission-critical data vulnerable not only to outside threats but also to a disgruntled employee or other insider threats.

There is a better solution, one which addresses both insider and outsider threats: Hashicorp Vault. Although Vault has many use cases, in this blog we’ll address the specific use case of managing database passwords and the best practices for doing so.

Let us first quickly discuss the key features of Vault and how we’re going to leverage them in our solution.

  • Secret Storage: Vault can store secrets in memory (non-persistent) or use a third-party (persistent) store. In either case, secrets are encrypted before being stored. In our case, we’ll use Hashicorp Consul as the persistent secret store.
  • Dynamic Secrets: Vault allows you to create on-demand secrets. We will create passwords dynamically to connect to a database; in our case, this will be a MySQL database.
  • Leasing, Renewal and Revocation: Vault allows password rotation based on the lease associated with each secret. In case of an intrusion, Vault allows key revocation for system lockdown. For our demonstration, we will renew the lease with every database connection; in production, you can set the renewal period based on your security and organizational policies.

Now that we understand Vault’s features, let’s look at how we can use it in a scenario where a Java application wants to connect to and read data from a MySQL database. Before we start using Vault, we need to set it up and configure it. This is typically done by system administrators. However, with Vault there is one major difference: the almighty “unseal” process.

Vault Setup

Depending on your system, download and install Vault (install instructions) and Consul (install instructions).

Follow the step-by-step instructions provided below for Vault setup. Vault init (step 3) is done only once, when the server is started for the first time with a secret storage (Consul) that has never been used before.
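
For a local experiment, that setup can be sketched like this, assuming the consul and vault binaries are on your PATH and everything runs on one host. The addresses, data directory, and TLS-disabled listener are assumptions for demonstration only, and the backend stanza shown is the older Vault 0.x spelling of what is now called storage:

# Start a single-node Consul server to act as Vault's persistent secret store
consul agent -server -bootstrap-expect=1 -data-dir=/tmp/consul &

# Write a minimal Vault config pointing at Consul (TLS disabled only for local experimentation)
cat > vault.hcl <<'EOF'
backend "consul" {
  address = "127.0.0.1:8500"
  path    = "vault"
}
listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1
}
EOF

# Start the Vault server against that config
vault server -config=vault.hcl &
export VAULT_ADDR=http://127.0.0.1:8200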

The Vault init step is incredibly important and, as a best practice, it should be done in the presence of a few other stakeholders in your organization. Vault init outputs the unseal keys and an initial root token, which should be distributed among stakeholders. Vault splits the master key into multiple unseal keys using Shamir’s Secret Sharing algorithm.

Once initialized, a quorum of key holders is needed to unseal Vault before configuration information can be read from it. The total number of unseal keys Vault generates is set with the “key-shares” parameter, and the number of keys needed to establish a quorum is set with the “key-threshold” parameter. So in our case, three out of five stakeholders are needed to form a quorum. The initial root token should then be destroyed because, if one is ever needed again, Vault can issue a one-time root token using the unseal keys.
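
As a sketch, initializing with five key shares and a threshold of three, then unsealing, looks like this (on Vault 0.9.2 and later the commands are spelled vault operator init and vault operator unseal):

# Generate 5 unseal keys; any 3 of them form a quorum to unseal Vault
vault init -key-shares=5 -key-threshold=3

# Three different key holders each provide their share
vault unseal <unseal-key-1>
vault unseal <unseal-key-2>
vault unseal <unseal-key-3>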

In essence, by splitting the master key into multiple unseal keys and distributing those among multiple individuals, we’re protecting against a single individual hijacking our corporate data.

MySQL Database Staging

We need to create a user in the MySQL database that Vault will use to log in and dynamically create users based on the access policies and lease times we set in Vault. In our case, we’ll create the user “vaultadmin”.
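
Here is a hedged sketch of staging that user; the host, password, and exact grant set are assumptions, but the point is that Vault needs enough privileges to create, grant, and drop the users it issues:

# Create the account Vault will log in with, and let it manage other users
mysql -u root -p -e "
  CREATE USER 'vaultadmin'@'%' IDENTIFIED BY 'S0meStr0ngPassw0rd';
  GRANT CREATE USER, SELECT, INSERT, UPDATE, DELETE, DROP ON *.* TO 'vaultadmin'@'%' WITH GRANT OPTION;
  FLUSH PRIVILEGES;"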

Vault Database (MySQL Secret Backend plugin) Setup

Secret backends help store and generate secrets dynamically. In our case, we’ll use the database secret backend with the MySQL plugin to create database credentials dynamically based on configured access control policies.
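
A sketch of that setup with Vault’s CLI, roughly as the database backend looked at the time (the MySQL endpoint, password, role name, and grant statements are assumptions; on current Vault the first command is vault secrets enable database):

# Mount the database secret backend
vault mount database

# Tell Vault how to reach MySQL using the vaultadmin account created earlier
vault write database/config/mysql \
    plugin_name=mysql-database-plugin \
    connection_url="vaultadmin:S0meStr0ngPassw0rd@tcp(mydb.example.com:3306)/" \
    allowed_roles="readonly"

# Define a role: the SQL Vault runs to create a short-lived user, plus its lease times
vault write database/roles/readonly \
    db_name=mysql \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"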

Now that Vault is set up and configured, we need to create a mechanism to let our application authenticate to Vault so that it is able to read the database credentials. For our purposes we will use token-based authentication:
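
As a hedged sketch (the policy name and file are hypothetical, and newer Vault versions spell these commands vault policy write and vault token create), we can write a policy that only permits reading credentials for the role defined above, then issue a token tied to that policy:

# A policy granting read access to the dynamic credentials path only
cat > myapp-policy.hcl <<'EOF'
path "database/creds/readonly" {
  capabilities = ["read"]
}
EOF

# Register the policy and create a token bound to it for the application
vault policy-write myapp-policy myapp-policy.hcl
vault token-create -policy="myapp-policy"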

Spring-Boot Application Setup

We use the authentication token created above in the Spring Boot bootstrap.properties file.
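
A minimal sketch of what that file might contain, assuming the Spring Cloud Vault property names below, a Vault server at vault.example.com, and the hypothetical “readonly” database role from the setup sketches above (verify the exact keys against the spring-cloud-vault documentation for the version you use):

# Write a minimal bootstrap.properties for the Spring Boot application
cat > src/main/resources/bootstrap.properties <<'EOF'
spring.application.name=mysql-vault-demo
spring.cloud.vault.host=vault.example.com
spring.cloud.vault.port=8200
spring.cloud.vault.scheme=https
spring.cloud.vault.token=<token-created-above>
spring.cloud.vault.database.enabled=true
spring.cloud.vault.database.role=readonly
EOF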

The source code for the application can be found at https://github.com/stelligent/MySQL-Vault. It uses Spring Cloud libraries, specifically the artifacts “spring-cloud-starter-vault-config” and “spring-cloud-vault-config-databases”, to connect to Vault and the MySQL database.

Application to Database Authentication Workflow

vault-mysql-workflow

  1. Application uses token and role to authenticate to Vault.
  2. Vault validates the token and role against roles and policies stored in secret storage. If validated, the workflow continues; otherwise access is rejected.
  3. Vault secret/database backend connects to MySQL database using connection information from step 2 of Vault Database Setup.
  4. Vault secret/database issues SQL statements to create user in MySQL database based on configuration in step 3 of Vault Database Setup.
  5. Vault passes the username/password information to the application. The application can then access the database until the lease expires; you can set the lease to expire per your organization’s password rotation policy. Once a lease expires, the credentials are no longer valid and the application will need to repeat steps 1 through 5 (the sketch after this list shows roughly what those dynamic credentials look like).
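
To make step 5 concrete, here is roughly what fetching those dynamic credentials looks like from the Vault CLI, assuming the hypothetical “readonly” role from the setup sketches above (the output values are illustrative):

# Ask Vault for a fresh set of MySQL credentials tied to the readonly role
vault read database/creds/readonly
#
# Key              Value
# lease_id         database/creds/readonly/<lease-uuid>
# lease_duration   1h0m0s
# username         v-token-readonly-<random>
# password         <random-password>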

Using Vault, it’s possible to have applications use dynamically generated, encrypted passwords with a key rotation policy, which is good for both security and compliance. Moreover, Vault itself provides protection against a single user accessing corporate secrets. Using Hashicorp Vault is much more secure than passing passwords around in configuration files.

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

DevOps in AWS Radio: Serverless (Episode 8)

In this episode, Paul Duvall and Brian Jakovich cover recent DevOps in AWS news and speak with Mike Roberts and John Chapin from Symphonia about Serverless architectures, DevOps, and AWS.

Here are the show notes:

DevOps in AWS News

Episode Topics

  1. Pros and Cons of Serverless architectures
  2. Symphonia’s Serverless speciality
  3. How DevOps and Continuous Delivery fit into Serverless
  4. Continuous Experimentation and Serverless architectures
  5. Types of applications or services that are most (or least) suitable for Serverless
  6. O’Reilly report: “What is Serverless?”
  7. Serverless architectures resources and people in the space
  8. Vendor lock-in

Additional Resources

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

Service discovery for microservices with mu

mu is a tool that makes it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this fourth post of the blog series focused on the mu tool, we will use mu to setup Consul for service discovery between multiple microservices.  

Why do I need service discovery?

One of the biggest benefits of a microservices architecture is that the services can be deployed independently of one another.  However, this presents a new challenge in that it becomes difficult for clients to know the list of containers to use when invoking the service.  Here are three different approaches to address this challenge:

  • Load balancer per microservice: Create a load balancer for every microservice and add/remove containers to the load balancer as deployments and scaling events occur.  The endpoint address of the load balancer is then shared with clients through some manual process.

cloudcraft - Microservices - multip.png

There are three concerns with this approach.  First, the endpoint address of the load balancer must never change or else all the clients will be broken and require updates to take the new endpoint address.  This can be addressed via DNS CNAME records, but still requires that the name chosen for the record must not change.  Second, there is the additional cost of a load balancer for every microservice.  Finally, there is additional latency introduced with adding a load balancer between each microservice invocation.

  • Shared load balancer: Create a load balancer that is shared by all microservices in an environment.  The load balancer must have rules for each microservice to route requests by URI patterns.

ms-architecture-3

The concern with this approach is that all traffic is now flowing through a single load balancer which can become a constraint in scaling the entire system.  Additionally, the load balancer becomes a shared resource amongst all the microservice teams, potentially impacting a team’s ability to operate independently of other teams.

  • Client load balancer: Load balancing from within the client is an approach in which the client has an awareness of all the containers in-service for a given microservice.  The client can then load balance between the containers when invoking the microservice.  This approach requires a system to provide service registration and service discovery.   

cloudcraft - mu-bananaservice-v3

The benefit of this approach is that there are no longer load balancers between each microservice request, so all the concerns with the prior approaches are addressed.  However, a new type of microservice, an edge service, will need to be deployed to allow clients outside the microservice environment (which do not have access to service discovery) to invoke the service.

The preferred approach is the third one, which uses service discovery and client-side load balancing within the microservice environment and edge routing with traditional load balancing for clients outside the microservice environment.  This approach provides the lowest latency and most loosely coupled solution for microservice invocation.

Let mu help!

The environment that mu creates for your microservice can manage the provisioning of Consul for service discovery and registration of your microservices.  Consul is a sort of phonebook for microservices.  It provides APIs for services to register their endpoints and for clients to look up those endpoints.
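
To make the “phonebook” idea concrete, Consul exposes a simple HTTP API that the client libraries call under the hood. Here is a hedged example, assuming Consul’s default port 8500 on the docker host address used later in this post:

# Ask Consul for healthy instances of banana-service and show their addresses and ports
curl -s "http://172.17.0.1:8500/v1/health/service/banana-service?passing" \
  | jq '.[].Service | {Address, Port}'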

Let’s demonstrate this by adding an additional milkshake service that invokes the banana service from the first post.  Additionally, we will create a zuul-router service to provide an edge service via Netflix’s Zuul.  Zuul is a proxy service that serves as the front door for all requests from outside the microservice environment.  Zuul will use Consul for service discovery to determine where best to route the incoming request.  Additionally, Zuul provides an excellent location to enforce policies such as authentication, authorization or logging on all incoming requests.

Enabling Consul and Edge Router

The first thing we will want to do is set up our edge router with Zuul.  This is just a matter of adding the @EnableZuulProxy and @EnableDiscoveryClient annotations to the Spring Boot application:

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
public class ZuulRouterApplication {

   public static void main(String[] args) {
     SpringApplication.run(ZuulRouterApplication.class, args);
   }
}

Zuul is configured via the application.yml file in src/main/resources.  For each service that we want exposed via the edge router, we add URI path patterns:

spring:
  application:
    name: zuul-router
zuul:
  routes:
    milkshake-service:
      path: /milkshakes/**
      stripPrefix: false
    banana-service:
      path: /bananas/**
      stripPrefix: false

In order to enable Consul in your environment, you need to update the environment definition in the mu.yml file.  Additionally, you need to configure Spring Cloud Consul to connect to the docker host ip address for service discovery.  We will also want to configure Spring Cloud to not register with Consul, since mu will already configure the Registrator agent on your ECS container instances:

 environments:
 - name: dev
   cluster:
     maxSize: 5
   discovery:
     provider: consul
 - name: production

service:
  name: zuul-router
  port: 8080
  pathPatterns:
  - /*
  environment:
    SPRING_CLOUD_CONSUL_HOST: 172.17.0.1
    SPRING_CLOUD_CONSUL_DISCOVERY_REGISTER: 'false'
  pipeline:
    source:
      provider: GitHub
      repo: cplee/zuul-router
    build:
      image: aws/codebuild/java:openjdk-8

Create Milkshake Service

Now we can create a new service to manage the creation of milkshakes.  The service looks very similar to the banana service, with the exception of declaring a Spring RestTemplate annotated with @LoadBalanced to enable client-side load balancing via Ribbon.

 

@SpringBootApplication
@EnableDiscoveryClient
public class MilkshakeApplication {

  @LoadBalanced
  @Bean
  RestTemplate restTemplate(){
     return new RestTemplate();
  }
}

Now we can use the RestTemplate to make calls directly to the banana service.  Ribbon will do a lookup in Consul for a service named banana-service and replace it in the URL with one of the container’s IP and port:

@Component
public class BananaProvider implements FlavorProvider {

  @Autowired
  private RestTemplate restTemplate;

  private List<Map<String,Object>> getAll() {
    ParameterizedTypeReference<List<Map<String, Object>>> typeRef =
            new ParameterizedTypeReference<List<Map<String, Object>>>() {};

    ResponseEntity<List<Map<String, Object>>> exchange =
            this.restTemplate.exchange("http://banana-service/bananas",HttpMethod.GET,null, typeRef);

    return exchange.getBody();
  } 

Try it out!

After we have deployed all three services, we can use mu to confirm that all are running as expected.

~ ❯❯❯ mu env show dev                                                                                                                                                                                                       

Environment:    dev
Cluster Stack:  mu-cluster-dev (UPDATE_COMPLETE)
VPC Stack:      mu-vpc-dev (UPDATE_COMPLETE)
Bastion Host:   35.164.117.25
Base URL:       http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com

Container Instances:
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
|    EC2 INSTANCE     |   TYPE   |     AMI      |     AZ     | CONNECTED | STATUS | # TASKS | CPU AVAIL | MEM AVAIL |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+
| i-08e3edc8c644f0534 | t2.micro | ami-62d35c02 | us-west-2b | true      | ACTIVE |       3 |       604 |       139 |
| i-05bc14a67e53889e1 | t2.micro | ami-62d35c02 | us-west-2a | true      | ACTIVE |       3 |       604 |       139 |
| i-0b56a0d9572531e9e | t2.micro | ami-62d35c02 | us-west-2c | true      | ACTIVE |       3 |       604 |       139 |
| i-05b2188a5c575fbeb | t2.micro | ami-62d35c02 | us-west-2b | true      | ACTIVE |       1 |       624 |       739 |
+---------------------+----------+--------------+------------+-----------+--------+---------+-----------+-----------+

Services:
+-------------------+---------------------------+------------------+---------------------+
|      SERVICE      |         IMAGE             |      STATUS      |     LAST UPDATE     |
+-------------------+---------------------------+------------------+---------------------+
| milkshake-service | milkshake-service:9e4bcd9 | CREATE_COMPLETE  | 2017-05-12 11:33:05 |
| zuul-router       | zuul-router:3d4795c       | UPDATE_COMPLETE  | 2017-05-12 12:09:47 | 
| banana-service    | banana-service:3b62124    | UPDATE_COMPLETE  | 2017-05-12 11:32:55 |
+-------------------+---------------------------+------------------+---------------------+

We can then use curl to get a list of all the bananas available via the banana-service:

curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq
[
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/9"
      }
    ]
  }
]

Next we try to create a milkshake using the milkshake-service:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/milkshakes\?flavor\=Banana | jq                                                                         
{
  "timestamp": "2017-05-15T19:12:56.640+0000",
  "status": 500,
  "error": "Internal Server Error",
  "exception": "org.springframework.web.client.HttpClientErrorException",
  "message": "429 Not enough bananas to make the shake.",
  "path": "/milkshakes"
}

Looks like there aren’t enough bananas to make a milkshake.  Let’s create another banana:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas

~ ❯❯❯ curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq                                                                                                                         
[
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/9"
      }
    ]
  },
  {
    "pickedAt": null,
    "peeled": null,
    "links": [
      {
        "rel": "self",
        "href": "http://mu-cl-ecsel-144kxqmiry9wi-1411768500.us-west-2.elb.amazonaws.com/bananas/10"
      }
    ]
  }
]

Now let’s try again creating a milkshake:

~ ❯❯❯ curl -s -d "{}" -H "Content-Type: application/json" http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/milkshakes\?flavor\=Banana | jq
{
  "id": 3,
  "flavor": "Banana"
}

This time it worked, and if we query the list of bananas again, we see that 2 have been deleted for the milkshake:

~ ❯❯❯ curl -s http://mu-cl-EcsEl-144KXQMIRY9WI-1411768500.us-west-2.elb.amazonaws.com/bananas | jq                                                                                                                        
[]

Conclusion

Decomposing a monolithic application into microservices presents an interesting challenge in enabling services to invoke one another while still keeping them loosely coupled.  Using a client-side load balancer like Ribbon along with a service discovery tool like Consul provides an excellent solution to this challenge.  As demonstrated in this post, mu makes it simple to enable service discovery in your microservice environment to help achieve this solution.  Head over to stelligent/mu on GitHub and get started!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

Cloud Custodian Cleans Up Your Cloud Clutter

AWS allows you to build enormous and complex cloud infrastructures in a matter of hours. With the ability to create resources so easily, sometimes it can be hard to manage all those resources. If only there were a simple but powerful tool that could manage it all. Cloud Custodian (a.k.a. C7N) is that tool: a Python CLI that helps you manage your AWS account using a simple, YAML-formatted policy config file and time-based or event-based Lambdas. The config file lets you define policies for a wide variety of management activities, from tag compliance and backups to garbage collection and encryption.

Cloud computing has made creating and managing web resources insanely easy, quite possibly too easy. You can now spin up quite a few computing, database, and storage resources with the click of a button or the stroke of a return key. However, if you use a company account, you likely spin up those resources often for demonstration and testing purposes, without considering the cost or clutter you might be creating along with it. This was “the problem” at Capital One when they created this very powerful tool for managing the cleanup of your superfluous cloud resources. Capital One started developing Cloud Custodian in July 2015 and open-sourced the tool in April 2016.

Cloud Custodian’s feature set has grown exponentially with its popularity because the maintainers are very good about responding to feature requests. It’s now grown to the point where there’s not much in the AWS world you can’t do with it. Here’s a short list of some things you might be surprised it can do:

  • Encryption
  • Backups
  • Garbage Collection
  • Unused Resources
  • Off-hours
  • Tag Compliance
  • SG Compliance

Odds are, though, you’re considering Cloud Custodian for its namesake: cleaning up your AWS account, managing resources and costs during off-hours, and overall garbage collection. True to its name, this is where Cloud Custodian excels. With a relatively simple configuration file you can tidy and trim your AWS account and keep it that way as you grow your business.

Here’s a very basic example custodian.yml file that stops all EC2 instances tagged with Custodian:

policies:
  - name: stop-instances
    resource: ec2
    filters:
      - "tag:Custodian": present
    actions:
      - stop

Cloud Custodian is great for mid-to-large-sized companies that give a large number of their employees full access to a company AWS account. Naturally, such an account quickly becomes cluttered with dozens of CloudFormation stacks, VPCs, old test instances, and Lambda functions. Here at Stelligent we have an AWS Labs account for exploring and testing in AWS. We use Cloud Custodian to clean up old testing resources based on age and resource tags.

Your Cloud Custodian Strategy


As of today, there’s not much Custodian can’t do in your AWS account so it’s good to explore what Cloud Custodian can do for you before deciding on your overall strategy. Here are four common use-cases:

  • Automatic clean-up: Using the mode property, you can run actions in response to a variety of CloudWatch Events. Read more
  • Monitoring your environment: This is one of my favorite features. Custodian generates CloudWatch metrics by default so it’s easy to throw together dashboards that give you full visibility into what is being managed by Custodian and what isn’t. It’s hard to get good visibility in a vast system like AWS. Read more
  • Stopping Resources during Off-hours: Custodian makes it very easy to set up Off-hours for your resources based on tags. Below is an example. Read more

    policies:
      - name: offhours-stop
        resource: ec2
        filters:
          - type: offhour
            tag: downtime
            onhour: 8
            offhour: 20
        actions:
          - stop
  • Tag-compliance: One of the most common use cases for Custodian is tag-compliance. You can manage tag-compliance policies for your entire account in a single config file. You can even check it into version control. And, if you’re really ambitious, you can create a pipeline to watch your version control system that runs custodian for you so you don’t have to activate virtualenv on your personal machine every time you want to make a change. Read more

Prereqs and Pro Tips

Cloud Custodian is very well documented, so if you’re excited to start taking out the digital trash in your AWS account there’s no better place to start than their website and documentation. There are a few things to keep in mind before diving head-first into the cloud equivalent of the custodial arts:

Prereqs

At Stelligent we’re a tiny bit obsessed with one-command solutions. I have to admit, I cringed a little when I saw that Cloud Custodian takes three or more commands to install and run. In the Getting Started section of the docs, Custodian requires you to have Python, pip, and virtualenv installed before you can even install Cloud Custodian. Then, once you’ve activated a virtualenv to install and run it the first time, you’ll need to re-activate that virtualenv every time you want to run it in the future. That’s why I recommend using a pipeline or CloudFormation template, which brings us directly to our pro tips below.
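
Before we get to the tips, for reference, that multi-command flow looks roughly like this (a hedged sketch; the policy file name and output directory are yours to choose):

# One-time setup: isolate Custodian in a virtualenv
virtualenv custodian
source custodian/bin/activate
pip install c7n

# Every subsequent run: re-activate the virtualenv, then execute your policies
source custodian/bin/activate
custodian run --output-dir=output custodian.yml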

Pro Tip #1 – Minimalist Custodian

The easiest way to get started cleaning up your AWS account with Custodian is to go through your account and tag everything you want to keep with something like “NoCustodian”. Then, set up policies that act on everything else. As a starting point, here again is the simple policy that stops all EC2 instances tagged with Custodian:

policies:
  - name: stop-instances
    resource: ec2
    filters:
      - "tag:Custodian": present
    actions:
      - stop

Click the button below to launch an example CloudFormation Stack that boots an EC2 instance and then uses Custodian to stop the instance.

Pro Tip #2 – Don’t piss off your co-workers

The first thing you’ll be tempted to do when implementing Cloud Custodian is terminate all the old and un-used resources in your account. Just be sure all the relevant parties in your company know what you’ll be terminating and when.

Pro Tip #3 – Use a Pipeline

Set up a CodePipeline that allows you to keep your custodian.yml file in source control and re-run it with CodeBuild every time you commit a change.

In Summary

If you need better visibility and automated management of your AWS account, Cloud Custodian has lots of helpful features that are easy to manage in a single config file. If you aren’t already a Python developer, I recommend setting up a CloudFormation template or automated pipeline to manage changes to your custodian.yml. You can use the launch button in Pro Tip #1 to see an example of Custodian in a CloudFormation template. Keep a lookout for a future blog post with a detailed example of a fully-automated Custodian pipeline.

Links
Custodian Website
Custodian Docs
Custodian GitHub

Videos
AWS This Is My Architecture: Cloud Custodian
Cloud Custodian @ AWS re:Invent
Cloud Custodian @ Serverlessconf


Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

Microservice databases with mu

mu is a tool that makes it simple and cost-efficient for developers to use AWS as the platform for running their microservices.  In this third post of the blog series focused on the mu tool, we will use mu to manage microservice databases in the pipeline we built in the first post.  

Why should my microservice manage the database?

As discussed in prior posts, adopting a microservice architecture can increase a team’s ability to deliver software faster through decoupling and team autonomy.  By decomposing an application into microservices and then giving teams complete ownership of their microservices, the teams can then make decisions and implement changes independent of other teams and their microservices.

Unless the same approach is taken to decompose the databases that support the microservices, the benefits of microservices will be limited by cross-team dependencies on shared databases. When your microservices share a database, in effect you’ve used the database as an API between the services.  This type of architecture causes tight coupling between services and will likely require regression testing and even deployment of multiple services at the same time.

Martin Fowler, in his post titled Microservices, says “Microservices prefer letting each service manage its own database.”  By decomposing all the way down into the database you can realize the benefits of agility that microservices has to offer.

decentralised-data
Source: https://martinfowler.com/articles/microservices.html

Let mu help!

The continuous delivery pipeline that mu creates for your microservice can manage the provisioning of a database.  Additionally, the details about the database can be injected into your service as environment variables.

Let’s demonstrate this by adding a database to the microservice pipeline we created in the first post for the banana service.

Define the database

Previously, the banana service was using an embedded H2 database.  This won’t work in a production environment, so we need an RDS database instance that the microservice can use.  Adding a database for a service with mu is as simple as adding a couple of lines to your mu.yml file:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana

By default, this will create an RDS database instance of size db.t2.small with the aurora engine.  Next we need to reference the database from our microservice.  We can pass the database URL and credentials via environment variables:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana

  environment:
    SPRING_DATASOURCE_USERNAME: ${DatabaseMasterUsername}
    SPRING_DATASOURCE_PASSWORD: ${DatabaseMasterPassword}
    SPRING_DATASOURCE_URL: jdbc:mysql://${DatabaseEndpointAddress}:${DatabaseEndpointPort}/${DatabaseName}

This approach does have the disadvantage of passing database credentials as environment variables.  This presents a security issue, as any IAM user or role with access to the ECS task API would be able to discover the credentials.

AWS has recently announced IAM database authentication, which the microservice can use to obtain temporary database credentials via an AWS API call.  Although we will save the details for a future blog post, for now it’s worth mentioning that mu can configure the database for IAM database authentication to work around this issue of passing credentials as environment variables.  This would be accomplished with a mu.yml like this:

service:
  name: banana-service
  port: 8080
  pathPatterns:
  - /bananas
  database:
    name: banana
    instanceClass: db.t2.medium
    iamAuthentication: true

  environment:
    SPRING_DATASOURCE_URL: jdbc:mysql://${DatabaseEndpointAddress}:${DatabaseEndpointPort}/${DatabaseName}

The configuration of the tables and the data in the database is managed with Liquibase. When the service starts, Liquibase creates or updates the database tables and data. This is accomplished by creating a file named db.changelog-master.yaml in src/main/resources/db/changelog/.
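
As a hedged illustration only (the table and column names are made up; consult the Liquibase YAML reference for the full syntax), a minimal changelog might be created like this:

# Create a minimal Liquibase changelog for the service
mkdir -p src/main/resources/db/changelog
cat > src/main/resources/db/changelog/db.changelog-master.yaml <<'EOF'
databaseChangeLog:
  - changeSet:
      id: 1
      author: banana-service
      changes:
        - createTable:
            tableName: banana
            columns:
              - column:
                  name: id
                  type: bigint
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: picked_at
                  type: datetime
EOF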

Now we can commit and push our changes to cause a new run of the pipeline to occur:

$ git add --all && git commit -m "add database" && git push

We see our pipeline is green, so we have confidence that the new database is working properly with the microservice.

Conclusion

Realizing the benefits of microservices requires decomposing not just the application, but also the databases that support it.  As demonstrated in this post, mu makes it simple to manage your databases and wire them up to your microservices.  The goal is that mu empowers you to implement microservice best practices in your application.

In the upcoming posts in this blog series, we will look into:

  • Service Discovery – use mu to enable service discovery via `Consul` to allow for inter-service communication
  • Additional Use Cases – deploy applications other than microservices via mu, like a WordPress stack

Until then, head over to stelligent/mu on GitHub and get started!

Additional Resources

Did you find this post interesting? Are you passionate about working with the latest AWS technologies? If so, Stelligent is hiring and we would love to hear from you!

AWS CodeStar – Quickly develop, build, and deploy applications on AWS

AWS CodeStar is a new service that changes the way development teams deliver software in AWS. CodeStar makes the process of setting up software applications for continuous delivery easier to manage through integrated authorization and access management, centralized member collaboration, and automated environment provisioning.

adh-team-whowhat1
(1) “Working with AWS CodeStar Teams.” Working with AWS CodeStar Teams – AWS CodeStar. Amazon Web Services, 2017. Web. 01 May 2017. – http://docs.aws.amazon.com/codestar/latest/userguide/working-with-teams.html

Through the use of CodeStar you can now automatically create entire environments for your application and all of its associated AWS resources. Furthermore, CodeStar is great for teams starting brand-new applications and projects. Because of its simplicity, development teams can create efficient software workflows that build, test, and release software on AWS much faster than before. Some of the benefits of CodeStar include:

  • Automatic Provisioning of Resources: When you create a project through CodeStar, AWS will automatically provision a handful of the underlying resources that will be part of your software’s environment through the use of AWS CloudFormation. Some of these resources could include AWS Elastic Beanstalk, AWS EC2 instances, AWS S3 Buckets, and an AWS CodeCommit repository. One of the most significant resources that CodeStar creates is a continuous delivery pipeline. This pipeline is built using AWS CodePipeline and initially contains two stages: a Source (Commit) stage and an Application (Deploy) stage. If you need additional stages, you can modify your CodePipeline pipeline accordingly.
  • Pre-built Code Templates: When you begin creating a project with CodeStar, you can choose from many pre-built code templates for applications that run on AWS Elastic Beanstalk, AWS EC2, or AWS Lambda. These templates come with sample applications that are already set up and ready to be modified, and you can choose between five programming languages to build your software in: Ruby, Python, PHP, Java, and JavaScript. After you choose your programming language, you can choose from three ways of editing your project code: Visual Studio, Eclipse, or Command Line Tools.

For the remainder of this blog I will demonstrate how to setup and build a CodeStar project using a Ruby on Rails template and will deploy the sample application on an AWS EC2 instance.

CodeStar Project with Ruby on Rails

Creating your CodeStar Project

  1. The first thing you will need to do to create your CodeStar project is to log into your AWS console, go to the CodeStar console, and select “Create New Project”.
  2. You will be directed to a page that displays the wide variety of project templates for you to choose from. The applications this service supports come as templates ready to deploy on:
    1. AWS Elastic Beanstalk (Automated management of capacity and load balancing), Amazon EC2 with AWS CodeDeploy (Flexible deployment onto any type of instance), and AWS Lambda (Lambda is serverless technology and uses AWS CodeBuild to build your artifacts automatically)
      1. Side note: As of now it is not possible to create a CodeStar project via a CloudFormation template. It is also not possible to start a CodeStar project with your already-built application or to use GitHub as your code repository. The only way to achieve this would be to modify the Source stage of the CodePipeline that gets created for you once it is complete.
    2. For my example I am going to choose the “Ruby on Rails Web Application” that will be running on an Amazon EC2 instance.

Screen Shot 2017-04-25 at 5.16.23 PM

3. You will then be prompted to enter the name for your project (Project name) and will be able to edit the Project ID as well. You can also choose whether or not to allow AWS CodeStar to administer AWS resources on your behalf by checking/unchecking the box at the bottom of the page. If you chose a template with a project running on EC2 (such as my example) then you will be able to edit the EC2 configuration as well. This includes choosing:

  1. Your own VPC (you have the choice of being assigned a default VPC and Subnet or choosing an existing one. You cannot create a VPC here.)
    1. Side note: To create an AWS VPC and a subnet you must go into the Networking & Content Delivery Console: VPC section and create them.
  2. Your Subnet to deploy your instance into
  3. The instance type (I chose t2.micro) 

Screen Shot 2017-04-25 at 5.40.22 PM

4. Select your AWS EC2 Keypair and select “Create Project”

5. You will then be able to choose how you want to edit your project code from the following three choices (Visual Studio, Eclipse, or Command line tools). For my example I chose Command line tools. At the bottom of the page will also be the code repository URL for your project and you can choose an access method between SSH and HTTPS.

6. The next page will be the Connect to your tools page which is where you’ll select your local machine’s operating system (macOS, Windows, Linux) and your connection method (HTTPS, SSH).

  1. For HTTPS connection: If you haven’t done so already, you will need to install a Git client on your local machine (there is a link to install it in Step 1). You will also need to generate Git credentials for your AWS IAM user by clicking the “here” link in Step 2. Once you have completed the first two steps, you can clone your repository onto your local machine by copying the Git command in Step 3 and running it in whatever directory you would like. When you clone the repository, you will be prompted for a user name and password, which are the Git credentials you generated for your IAM user. Hit the “Skip” button below to continue on to your management dashboard.
  2. For SSH connection: If you haven’t done so already, you will need to install a Git client on your local machine (there is a link to install it in Step 1). You will then need to register your SSH public key (for help on how to do this, please see the link located in the instructions in Step 2). Once you have registered your SSH key, go to your ~/.ssh directory in your terminal and create a file named “config”. Add the following lines to this file:
Host git-codecommit.*.amazonaws.com
User Your-IAM-SSH-Key-ID-Here
IdentityFile ~/.ssh/Your-Private-Key-File-Name-Here

Once you have saved the file, you will need to ensure it has the right permissions by running the following command in your ~/.ssh directory:

chmod 600 config

After you have followed these steps you can clone the project repository onto your local machine by copying and pasting the command located in Step 4.

As mentioned earlier in this article, if you selected the box that “allows AWS to administer resources on your behalf” when creating your CodeStar project, CodeStar creates a CloudFormation stack that automatically deploys the environment and resources for your application. Here is what the CloudFormation stack and its resources look like if you chose to create the Ruby on Rails application on an EC2 instance:

Screen Shot 2017-05-01 at 5.14.33 PM

Pre-configured Management Dashboard

After you have created your CodeStar project  you will be given a pre-configured centralized management dashboard from which you will be able to view a variety of events that are going on with your application project. Things that are viewable in the default dashboard include your:

  • Application’s resource activity metrics via AWS CloudWatch
  • Code commits history
  • Your application’s endpoint (Outlined in red: my example contains a public EC2 DNS endpoint)
  • A visual of your AWS CodePipeline in which you can see real time progress of your software’s continuous delivery cycle.
  • You also have the option to add the Atlassian Jira Software extension to your dashboard so that you can directly track your application project’s issues and its collaborator’s tasks

Screen Shot 2017-04-27 at 3.40.31 PM.png

From the dashboard you can Configure issue tracking, which enables you to integrate the Jira extension into your project for easy tracking. You are also able to set up the team members who will be given access to work on your project and determine which role they will have on it. You just have to pick their IAM user name, choose whether remote access is allowed, and select one of the following roles for them:

  • Viewer
  • Contributor
  • Owner

Start Modifying Your Rails Application

For this example I will be opening up my sample Rails application by going to the application endpoint link on the CodeStar dashboard. The first modification that I will be making will be to the opening “hello page” of the application. Here is what the opening page of the sample application looks like when I go to the application endpoint:

Screen Shot 2017-04-27 at 3.03.10 PM

Assuming that you have cloned the Git repository for your project onto your local machine, you can now start to modify your Rails application and make changes using your own text editor. For this example I am just going to remove the links on the home page (/app/views/hello_page/hello.html.erb) and change some of the wording. After making my slight changes to the “hello page” and saving it, I can go into my Git repository in my local machine’s terminal and type the following commands to push my most recent changes:

git status
  • This will show you what changes have been made to your project

Screen Shot 2017-04-27 at 4.41.34 PM.png

git add app/views/hello_page/hello.html.erb
  • This will stage your changes to the hello page
git commit -m “[your message about the changes that have been made]”
git push
  • This will push your newly modified project into your code pipeline and will automatically trigger the continuous deployment cycle.

Here is what will happen to your CodePipeline on your dashboard when you “git push” your changes:

Screen Shot 2017-04-27 at 4.37.40 PM.png

Once the pipeline has succeeded through the Application stage, refresh your browser page with your application’s endpoint and see the new changes that have been made to your Rails application:

Screen Shot 2017-04-27 at 4.34.47 PM.png

From here on out you have a full Ruby on Rails application framework running on an Amazon EC2 instance where you can start to build/modify your own custom application. For more information about what you can do with your new Rails application, please refer to the README, which can be accessed by clicking on the “Code” box on the left side of your CodeStar dashboard.

Additional Resources

Here are some additional resources you might find useful:

Summary

In this post we talked about how to use the newly added AWS CodeStar service and discovered the benefits that it can offer to a variety of users. You learned about the different types of projects that CodeStar can create and how to easily interact with those projects upon their creation.

Let us know if you have any comments or questions @stelligent or @TreyMcElhattan

Stelligent is hiring! Do you enjoy working on complex problems like figuring out ways to automate all the things as part of a deployment pipeline? Do you believe in the “one-button everything” mantra? If your skills and interests lie at the intersection of DevOps automation and the AWS cloud, check out the careers page on our website.

DevOps in AWS Radio: AWS CodeStar (Episode 7)

In this episode, Paul Duvall and Brian Jakovich are joined by Trey McElhattan from Stelligent to cover recent DevOps in AWS news and speak about AWS CodeStar.

Stay tuned for Trey’s blog post on his experiences in using AWS CodeStar!

Here are the show notes:

DevOps in AWS News

Episode Topics

  1. What is AWS CodeStar? What are its key features?
  2. Which AWS tools does it use?
  3. What are the alternatives to using AWS CodeStar?
  4. If you’d like to switch one of the tools that CodeStar uses, how would you do this (e.g. use a different monitoring tool than CloudWatch)?
  5. Which are supported and how: SDKs, CLI, Console, CloudFormation, etc.?
  6. What’s the pricing model for CodeStar?

Additional Resources

  1. New- Introducing AWS CodeStar – Quickly Develop, Build, and Deploy Applications on AWS
  2. AWS CodeStar Product Details
  3. AWS CodeStar Main Page

About DevOps in AWS Radio

On DevOps in AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery in the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps in AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, containers, serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Databases and data (RDS, DynamoDB, etc.)
  • Organizational and team structures and practices
  • Team and organization communication and collaboration
  • Cultural indicators
  • Version control systems and processes
  • Deployment pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service architectures (e.g. microservices)
  • Automation of build and deployment processes
  • Automation of testing and other verification approaches, tools, and systems
  • Automation of security practices and approaches
  • Continuous feedback systems
  • Many other topics…

Docker lifecycle automation and testing with Ruby in AWS

My friend and colleague Stephen Goncher and I recently got to spend some real time implementing a continuous integration and continuous delivery pipeline using only Ruby. We were successful in developing a new module in our pipeline gem that handles many of the Docker Engine’s needs without having to skimp on testing and code quality. By using the swipely/docker-api gem, we were able to write well-tested, DRY pipeline code that future users of our environment can leverage with confidence.

Our environment included Amazon Web Services’ Elastic Container Registry (ECR), which proved to be more challenging to implement than we originally anticipated. The purpose of this post is to help others implement some basic Docker functionality in their pipelines more quickly than we did. In addition, we will showcase some of the techniques we used to test our Docker images.

Quick look at the SDK

It’s important to make the connection in your mind now that each interface in the docker gem has a corresponding API call in the Docker Engine. With that said, it would be wise to take a quick stroll through the gem’s documentation and the Docker Engine API reference before writing any code. There are a few methods, such as Docker.authenticate!, that require some advanced configuration that is only vaguely documented, and you’ll need to combine all the sources to piece them together.
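For instance, the gem’s top-level helpers and model classes are thin wrappers over Docker Engine endpoints. A quick way to see the mapping (and to verify your connection) is something like the following, which only assumes a local Docker daemon:

require 'docker'

# Each gem call below wraps a single Docker Engine API endpoint:
Docker.version        # GET /version
Docker.info           # GET /info
Docker::Image.all     # GET /images/json
Docker::Container.all # GET /containers/json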

For those of you who are example-driven learners, be sure to check out the example project on GitHub that we put together to demonstrate these concepts.

Authenticating with ECR

We’re going to save you the trouble of fumbling through the various documentation by providing an example of authenticating with an Amazon ECR repository. The example below assumes you have already created a repository in AWS. You’ll also need an instance role attached to the machine you’re executing this snippet from, or have your API key and secret configured.

Snippet 1. Using Ruby to authenticate with Amazon ECR

require 'aws-sdk-core'
require 'base64'
require 'docker'

# AWS SDK ECR Client
ecr_client = Aws::ECR::Client.new

# Your AWS Account ID
aws_account_id = '1234567890'

# Grab your authentication token from AWS ECR
token = ecr_client.get_authorization_token(
  registry_ids: [aws_account_id]
).authorization_data.first

# Remove the https:// to authenticate
ecr_repo_url = token.proxy_endpoint.gsub('https://', '')

# Authorization token is given as username:password, split it out
user_pass_token = Base64.decode64(token.authorization_token).split(':')

# Call the authenticate method with the options
Docker.authenticate!('username' => user_pass_token.first,
                     'password' => user_pass_token.last,
                     'email' => 'none',
                     'serveraddress' => ecr_repo_url)

Pro Tip #1: The docker-api gem stores the authentication credentials in memory at runtime (see Docker.creds). If you’re using something like a Jenkins CI server to execute your pipeline in separate stages, you’ll need to re-authenticate at each step. Here’s an example of how the sample project accomplishes this.
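As a rough sketch of that idea (our own helper, not the sample project’s exact code), the authentication from Snippet 1 can be wrapped in a method you call at the top of every stage:

require 'aws-sdk-core'
require 'base64'
require 'docker'

# Hypothetical helper: re-run the ECR authentication from Snippet 1 so each
# pipeline stage starts with fresh in-memory credentials.
def authenticate_with_ecr!(aws_account_id)
  token = Aws::ECR::Client.new
                          .get_authorization_token(registry_ids: [aws_account_id])
                          .authorization_data.first
  user, pass = Base64.decode64(token.authorization_token).split(':')
  Docker.authenticate!('username'      => user,
                       'password'      => pass,
                       'email'         => 'none',
                       'serveraddress' => token.proxy_endpoint.gsub('https://', ''))
end

# Call this at the start of every stage that talks to ECR:
authenticate_with_ecr!('1234567890')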

Snippet 2. Using Ruby to log out

Docker.creds = nil

Pro Tip #2: You’ll need to log out (deauthenticate) from ECR in order to pull images from the public/default docker.io registry.
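For example (the image name here is just an illustration), pulling a public image after working with ECR might look like this:

# Drop the ECR credentials so pulls from the default docker.io registry work again.
Docker.creds = nil
Docker::Image.create('fromImage' => 'httpd:2.4')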

Build, tag and push

The basic functions of the docker-api gem are pretty straightforward to use with a vanilla configuration, but when you tie in a remote repository such as Amazon ECR there can be some gotchas. Now that you’re authenticated, let’s get to doing some real work! Here are some more examples of the various stages of a Docker image you’ll encounter in your pipeline.

The following snippets assume you’re authenticated already.

Snippet 3. The complete lifecycle of a basic Docker image

# Build our Docker image with a custom context
image = Docker::Image.build_from_dir(
  '/path/to/project',
  { 'dockerfile' => 'ubuntu/Dockerfile' }
)

# Tag our image with the complete endpoint and repo name
image.tag(repo: 'example.ecr.amazonaws.com/stelligent-example',
          tag: 'latest')

# Push only our tag to ECR
image.push(nil, tag: 'latest')
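One option not shown above (this is our own addition, not part of the original snippet): build_from_dir also yields the raw build output if you pass it a block, which is useful for surfacing Docker build logs in your CI console. A rough sketch:

require 'json'
require 'docker'

# Stream the Docker build output so your CI console shows build progress.
image = Docker::Image.build_from_dir('/path/to/project') do |chunk|
  begin
    log = JSON.parse(chunk)
    print log['stream'] if log.key?('stream')
  rescue JSON::ParserError
    # Ignore any non-JSON chunks in the stream.
  end
end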

Integration Tests for your Docker Images

Here at Stelligent, we know that the key to software quality is writing tests. It’s part of our core DNA. So it’s no surprise that we have a method for writing integration tests for our Docker images. The solution uses Serverspec to launch the intermediate container, execute the tests, and compile the results, while we use the docker-api gem we’ve been learning to build the image and provide the image id to the test context.

Snippet 5. Writing a serverspec test for a Docker Image

require 'serverspec'

describe 'Dockerfile' do
  before(:all) do
    set :os, family: :debian
    set :backend, :docker
    set :docker_image, '123456789' # image id
  end

  describe file('/usr/local/apache2/htdocs/index.html') do
    it { should exist }
    it { should be_file }
    it { should be_mode 644 }
    it { should contain('Automation for the People') }
  end

  describe port(80) do
    it { should be_listening }
  end
end
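The image id in the spec above is hard-coded. In practice you’ll want to feed in the id of the image you just built with docker-api; a minimal sketch of that glue (the variable and environment names are ours, not the sample project’s) is to export the id and have the spec’s before(:all) block read it with set :docker_image, ENV['DOCKER_IMAGE_ID']:

require 'docker'

# Build the image with docker-api, hand its id to the Serverspec suite through
# an environment variable, and fail the pipeline if the tests fail.
image = Docker::Image.build_from_dir('/path/to/project')
ENV['DOCKER_IMAGE_ID'] = image.id
system('rspec spec/integration/docker/stelligent-example_spec.rb') || abort('Image tests failed')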

Snippet 6. Executing your test

$ rspec spec/integration/docker/stelligent-example_spec.rb

You’re Done!

Using a tool like the swipely/docker-api gem to drive your automation scripts is a huge step forward in providing fast, reliable feedback in your Docker pipelines compared to writing Bash. By doing so, you’re able to write unit and integration tests for your pipeline code to ensure that both your infrastructure and your application are well-tested. Not only can you unit test your docker-api implementation, but you can also leverage the AWS SDK’s ability to stub responses and take your testing a step further when working with Amazon Elastic Container Registry.
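To illustrate that last point (a minimal sketch, assuming RSpec; the spec below is ours and not part of the sample project), the SDK’s built-in stubbing lets you exercise the token-decoding logic from Snippet 1 without ever calling AWS:

require 'aws-sdk-core'
require 'base64'

RSpec.describe 'ECR authentication' do
  it 'splits the decoded authorization token into username and password' do
    # stub_responses: true keeps the client from making real AWS calls;
    # we then supply canned data for the operation under test.
    ecr_client = Aws::ECR::Client.new(region: 'us-east-1', stub_responses: true)
    ecr_client.stub_responses(:get_authorization_token, authorization_data: [{
      authorization_token: Base64.encode64('AWS:fake-password'),
      proxy_endpoint: 'https://123456789012.dkr.ecr.us-east-1.amazonaws.com'
    }])

    token = ecr_client.get_authorization_token.authorization_data.first
    user, pass = Base64.decode64(token.authorization_token).split(':')

    expect(user).to eq('AWS')
    expect(pass).to eq('fake-password')
  end
end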

See it in Action

We’ve put together a short (approximately 5-minute) demo of using these tools. Check it out on GitHub and take a test drive through the lifecycle of Docker within AWS.


Working with cool tools like Docker and its open source SDKs is only part of the exciting work we do here at Stelligent. To take your pipeline a step further from here, check out mu, a microservices platform that will deploy your newly tested Docker containers. You can take that epic experience a step further and become a Stelligentsia, because we are hiring innovative and passionate engineers like you!

Migrating from ServerSpec to InSpec

Key Benefits

  • OS and CM agnostic
  • InSpec runtime is faster than ServerSpec
  • DRY – Componentization of test suites using shared InSpec Profiles
  • Removes downloading of Gem files on every test execution
  • Quick to Change from ServerSpec to InSpec
  • CLI binary available to run tests from CI/CD
  • Allows usage of community test suites
  • Allows usage of InSpec Profiles from a Chef Compliance server
  • Allows compliance reporting to Chef Automate Visibility server

ServerSpec has been a great automated integration test framework for years, but it has always had some limitations: when running with Test Kitchen, it downloads several gems every time tests are run, and there is no easy way to share test suites across multiple cookbooks or delivery solutions. Chef took on the challenge of extending ServerSpec to solve many of these shortcomings and created a project named InSpec. It supports the same fundamental syntax as ServerSpec and adds extended capabilities.

InSpec does not require downloading gems during testing. ChefDK comes with everything you need; at minimum that’s the inspec Rubygem, and it also includes the Test Kitchen extension gem, kitchen-inspec. This alone makes for faster testing, and it also helps in environments that may not have internet access.

InSpec also brings shared capabilities by way of what Chef calls InSpec Profiles. Profiles can be as simple or as complex as you would like to make them, and you can pass them arguments. For example, you can have a shared InSpec Profile hosted in its own Git repo that tests for the version of Chef Client installed. You can set a default Chef Client version to check for, but allow an argument to override that default, so you can pass in the version you expect and want the profile to check for. I have a simple example of that here.
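As a rough illustration of that pattern (the control and default below are ours, not the linked profile’s exact code), a shared profile can read the attribute and fall back to a default:

# controls/chef_client_spec.rb in a shared InSpec profile
chef_version = attribute('chef_version',
                         default: '12.19.36',
                         description: 'Chef Client version the node should have')

describe command('chef-client --version') do
  its('stdout') { should include(chef_version) }
end

With kitchen-inspec, the value for such an attribute comes from the verifier’s attributes block, as in the Kitchen config shown below.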

When using InSpec with Test Kitchen, it’s possible to call multiple InSpec Profiles, remote and/or local. For example, you could pull in a set of standard security/compliance profiles, profiles from other cookbooks that you wrap, a baseline (bootstrapping) profile, and then a local profile that checks the specific cookbook’s configuration.

Here’s an example Kitchen config from the bonusbits_mediawiki_nginx cookbook. This goes at the root level of a test suite.

    verifier:
      inspec_tests:
        - name: bootstrap
          git: https://github.com/bonusbits/inspec_bootstrap.git
        - name: bonusbits_base
          git: https://github.com/bonusbits/inspec_bonusbits_base.git
        - path: test/inspec/profiles/bonusbits_web/
      attributes:
        chef_version: '12.19.36'

It’s fairly easy and quick to migrate ServerSpec tests to InSpec tests. In fact, you could migrate some local ServerSpec tests to InSpec in a matter of minutes.

To demonstrate how quickly we can convert ServerSpec to InSpec, I have created a short series of videos that walk through the process. Companions to the videos are GitHub reference branches for each part and wiki articles, which I have linked below:

ServerSpec to InSpec – Part 1
Creating a ServerSpec Tested Chef Cookbook


GitHub Branch – Part 1
Wiki Article – Part 1

ServerSpec to InSpec – Part 2
Converting Local ServerSpec to Local InSpec Tests


GitHub Branch – Part 2
Wiki Article – Part 2

ServerSpec to InSpec – Part 3
Converting Local InSpec to Shared Remote Tests


GitHub Branch – Part 3
GitHub Example Shared InSpec Profile
Wiki Article

Complete Walkthrough Video Playlist

Spoiler!

Ok, so if you don’t have time to watch the videos or read the wiki articles, here’s the basic conversion from local ServerSpec to local InSpec tests. This is from Part 2. Part 3 covers the best part about InSpec, which is remote/shared tests.

Before

test
└── integration
    ├── default
    │   └── serverspec
    │       ├── nginx_spec.rb
    │       └── phpfpm_spec.rb
    └── helpers
        └── serverspec
            └── spec_helper.rb
test/integration/default/serverspec/nginx_spec.rb
require 'spec_helper'

describe 'Nginx' do
  it 'nginx installed' do
    expect(package('nginx')).to be_installed
  end

  it 'nginx service' do
    expect(service('nginx')).to be_enabled
    expect(service('nginx')).to be_running
  end
end
test/integration/default/serverspec/phpfpm_spec.rb
require 'spec_helper'

describe 'Php FPM' do
  it 'php-fpm installed' do
    expect(package('php70-fpm')).to be_installed
  end

  it 'php-fpm service' do
    expect(service('php-fpm-7.0')).to be_enabled
    expect(service('php-fpm-7.0')).to be_running
  end

  it 'nginx owns /var/log/php-fpm' do
    expect(file('/var/log/php-fpm')).to be_owned_by('nginx')
    expect(file('/var/log/php-fpm/7.0')).to be_owned_by('nginx')
  end

  it 'nginx owns /var/lib/php/7.0' do
    expect(file('/var/lib/php/7.0')).to be_grouped_into('nginx')
  end
end
test/integration/helpers/serverspec/spec_helper.rb
# Encoding: utf-8
require 'serverspec'

if (/cygwin|mswin|mingw|bccwin|wince|emx/ =~ RUBY_PLATFORM).nil?
  set :backend, :exec
  set :path, '/sbin:/usr/local/sbin:/bin:/usr/bin:$PATH'
else
  set :backend, :cmd
  set :os, family: 'windows'
end

After

test
└── integration
    └── default
        └── inspec
            ├── nginx_spec.rb
            └── phpfpm_spec.rb
test/integration/default/inspec/nginx_spec.rb
describe 'Nginx' do
  it 'nginx installed' do
    expect(package('nginx')).to be_installed
  end

  it 'nginx service' do
    expect(service('nginx')).to be_enabled
    expect(service('nginx')).to be_running
  end
end
test/integration/default/inspec/phpfpm_spec.rb
describe 'Php FPM' do
  it 'php-fpm installed' do
    expect(package('php70-fpm')).to be_installed
  end

  it 'php-fpm service' do
    expect(service('php-fpm-7.0')).to be_enabled
    expect(service('php-fpm-7.0')).to be_running
  end

  it 'nginx owns /var/log/php-fpm' do
    expect(file('/var/log/php-fpm')).to be_owned_by('nginx')
    expect(file('/var/log/php-fpm/7.0')).to be_owned_by('nginx')
  end

  it 'nginx owns /var/lib/php/7.0' do
    expect(file('/var/lib/php/7.0')).to be_grouped_into('nginx')
  end
end
.kitchen.yml
verifier:
  name: inspec

Resources